I have a long-running stored procedure that is executed from IIS. On average this stored procedure takes between two and five minutes to complete because it is searching through a large dataset (although it has taken around 20 minutes in some cases).
Most of the time the stored procedure works fine, but every now and then the SPIDs go into a sleeping state and never recover. The only solution I have found is to restart SQL Server and re-run the stored procedure.
There are no table inserts in the proc (only table variable inserts); the other statements are selects on a large table.
I'm stuck for where to start debugging this issue. Any hints on what it might be, or suggestions on tools that would help me find the issue, would be most helpful.
EDIT: More info added:
The actual issue is that the proc doesn't return the result set. My first thought was to look at the SPIDs: they were sleeping, but the CPU time was still increasing.
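For reference, a check along these lines (a sketch using the sys.dm_exec_sessions DMV, available from SQL Server 2005 onwards) shows the session state and the accumulating CPU time:

-- Show session state and accumulated CPU time for user sessions.
SELECT session_id, status, cpu_time, last_request_start_time
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
ORDER BY cpu_time DESC;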
It's a .NET app: .NET Core 3.1 with ASP.NET Core and a Blazor UI. The library used for the DB connection is System.Data.SqlClient, which I believe uses its own custom driver. Calling code below:
The stored procedure doesn't return multiple result sets, though obviously different instances of the proc run at the same time.
There are no limits on connection pooling in IIS.
@RichardWatts, when you say "re-run the stored procedure", do you mean that the same stored proc with the same parameters and data works once you restart SQL Server?
If so, look at the locks (sp_lock) on your tables; probably another process is locking some data and not releasing it properly, especially if you have transactions accessing the same tables.
What is the isolation level on your connection? If you can, try changing it to READ UNCOMMITTED to see if that solves your problem.
As an alternative you can also add a WITH (NOLOCK) or WITH (READUNCOMMITTED) hint to your SQL command.
Be aware that even a query running under READ UNCOMMITTED or NOLOCK will still be held up by modifications to the structure of your table or by an index rebuild, for example, and will in turn block them.
Nevertheless, be cautious: this solution depends on your environment, especially if your tables get lots of updates, deletes and inserts. This kind of isolation can lead to dirty reads, and it doesn't address the root cause of your problem, which I would bet is an uncommitted transaction (there are good articles that explain this).
Also run a DBCC CHECKTABLE just to be sure on that side.
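Before restarting the server, it is worth seeing who is blocking whom. A sketch (sys.dm_exec_requests needs SQL Server 2005 or later; on older versions sp_who2 shows similar information in its BlkBy column):

-- List requests that are blocked, and the session blocking them.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;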
Environment:
ASP.NET MVC 5.2.3.0
SQL Server 2014 (v12.0.2000.8)
Entity Framework 6
hosted on Azure
We have one page that gets data from the database using a stored procedure.
Lately we’ve noticed that sometimes this page takes about 20 seconds to load, so we started to investigate the problem. I tried to execute this stored procedure directly from Management Studio and it took about 150 ms:
So the next thing I did was create a console application that connects to the Azure SQL database and executes this stored procedure:
I've also tried to use SqlQuery from EF 6:
Same thing.
Important thing: this is not a permanent problem. Sometimes it occurs, sometimes it works just fine - 50/50.
I've checked the database load in the Azure portal - it is about 50% DTU usage during this performance issue. But I don't think this is related to database load, because it executes fast from Management Studio.
Currently I have no idea what the problem is, so I need help. I would like to note that a lot of employees use this page (the one that executes the stored procedure) all the time. Maybe this is somehow related to the problem.
So the question: why does it take so long to execute this stored procedure via ADO.NET / EF?
Do some debugging.
The most likely culprits include:
Database-side locking that is not released quickly, making the SP wait.
Parameter sniffing, where the cached query plan is not optimal for a specific set of parameters (which may also lead to locking blocking you). This is an SP problem - someone did not write the SQL to handle cases like that.
The info you give is irrelevant. See... SPs are NOT EXECUTED IN EF6 - EF6 forwards them to ADO.NET, which sends them to the database. As you say, they run slow IN THE DATABASE, so any C#-level debugging is as useless as the menu from my local pizzeria for this particular question. You have to go down and debug and analyze what happens on the database.
The SSMS screenshot you provide is totally useless - you need to run the SP in SSMS, for a case where it happens, and then use the query plan and proper analysis traces to see what happens.
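For what it's worth, a starting point on the database side could be the cached execution statistics for the procedure. A sketch, assuming SQL Server 2014 (or Azure SQL Database) and a placeholder procedure name:

-- Cached execution statistics for the suspect procedure.
-- 'MySlowProc' is a placeholder; run in the target database.
SELECT OBJECT_NAME(ps.object_id) AS proc_name,
       ps.execution_count,
       ps.total_elapsed_time / ps.execution_count AS avg_elapsed_microsec,
       ps.max_elapsed_time,
       ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id = DB_ID()
  AND OBJECT_NAME(ps.object_id) = 'MySlowProc';

If max_elapsed_time dwarfs the average, you are looking at the occasional bad plan or blocking, not a uniformly slow procedure.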
I have a few tables (base tables) which get inserted into and updated twice a week. I created indexes on these tables long back.
I'm applying logic on top of these tables in a stored procedure (without any parameters) and creating a final output table.
I'm scheduling this stored procedure twice a week using a SQL Server Agent job.
It is running slowly now (50 minutes), whereas if I run the stored procedure manually it runs faster (15 - 18 minutes).
Do I have to drop the indexes whenever an insert or update happens in the base tables, and recreate them afterwards?
If so, do I have to do it every week?
What is the effect on the performance of SQL Server Agent jobs?
Indexes do require maintenance, but how often depends entirely on how much data is changed and how those changes are ordered. You can google around for any number of scripts that check your index fragmentation and defragment the indexes. Usually, even for larger databases, weekly or nightly maintenance is more than enough.
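As a sketch, a fragmentation check on SQL Server 2005 or later could look like this (run it in the relevant database; the 30% threshold is just a common rule of thumb for rebuilding):

-- Report fragmented indexes in the current database.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;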
Anyway, the fact that the execution time differs depending on how you run it points to two possible causes:
Parametrization, or the SET properties used by the connection.
If your procedure uses parameters but you run the script manually, giving the parameters explicit values as you do, then SQL Server knows exactly which values you're using and can optimize the query execution to use the correct indexes on the spot. When your agent calls the procedure, the process is different: SQL Server may not know which values will be used, so it can end up with a generic plan - broad index scans or, worse yet, full table scans (reading all the data in the whole table, rendering the indexes useless) - to make sure it will find all the relevant data for the query. Google "SQL Server parametrization" to find out more; there's a sketch of one common workaround after this paragraph.
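As a sketch of that workaround, you can ask SQL Server to compile a fresh plan for the actual parameter values on every execution (the procedure and table names here are hypothetical):

-- Hypothetical procedure; OPTION (RECOMPILE) builds a plan per execution.
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);  -- plan compiled for the actual @CustomerId value
END;

The trade-off is compilation cost on every call, which is usually acceptable for a twice-a-week job.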
The SET properties, on the other hand, control specific session options that are applied automatically when you connect directly to the database via Management Studio. When you use an agent job, that may not be the case. This too can result in a different plan, which can take far more time.
Both of these cases depend on your database settings and the way your procedure works, so we have to guess here.
But typically, you need to set the following properties at the beginning of the script in the agent job to match the session properties used in your regular Management Studio session:
SET ANSI_NULLS ON;
GO
SET QUOTED_IDENTIFIER ON;
GO
All of the terms here can be googled, and I suggest you do so. Those articles can explain these things far better than I have time for here, especially given that - no disrespect intended - you're relatively new to SQL Server, so explaining them with suitable terminology in this space is difficult. :)
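One concrete thing you can do is run the following both in a Management Studio session and inside the agent job step, and compare the output; it lists the SET options in effect for the current session:

-- Lists the SET options active for this session
-- (ANSI_NULLS, QUOTED_IDENTIFIER, ARITHABORT, etc.).
DBCC USEROPTIONS;

Any option that differs between the two environments is a candidate for the plan difference.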
I am experiencing a problem whereby executing a stored procedure to update a record in a table results in a timeout error. I originally called the stored procedure from within an MVC3 application, which is where I first noticed the error.
However, using SQL Server Profiler, I was able to copy the code that was generated by ADO.NET and run it directly against the database. I let the query run for approximately 5 minutes and it still didn't return anything.
Here are a few facts:
The stored procedure has approximately 100 arguments that are being passed to it.
My MVC application, SSMS and SQL Server 2008 are all installed on the same machine.
The stored procedure attempts to update a single row in a table containing about 5000 entries.
There was a trigger that updated the LastModifiedDate and CreatedDate columns, but I removed these triggers and updated the EDMX to determine whether the triggers were causing an infinite loop.
Our live server runs exactly the same stored procedure (via classic ASP) as the one I am trying to run and achieves the correct result. Furthermore, the live server fails to run the same stored procedure under .NET.
My machine fails to run the stored procedure from both classic ASP and ASP.NET.
The stored procedure seems to fail only for a few of the rows, and others work perfectly fine.
I have tried changing the values of the parameters that are passed into the stored procedure
Other stored procedures work fine
There appears to be a lock on the particular table that the stored procedure is attempting to update, since other queries work fine even while waiting for this one to execute.
I'd be grateful if anyone has any ideas on other tests I could perform, or any tools I could use, to determine the root cause of the timeout error.
Thanks.
P.S. Don't tell me to change the command timeout property - I have already tried setting it to zero!
I can think of two things:
I assume you have already implemented exception handling in the stored procedure. If not, please do so, and try to get hold of the failing statement first. When you say it happens only for some rows, could it be due to bad data? Read this for information on how to do exception handling in SQL Server 2008.
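As a minimal sketch of what that looks like in SQL Server 2008 (TRY...CATCH; the table, column and parameter names are placeholders for your own):

-- @Id and @NewValue stand in for the procedure's real parameters.
BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE dbo.MyTable
    SET SomeColumn = @NewValue
    WHERE Id = @Id;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    -- Surface the failing statement's details to the caller.
    SELECT ERROR_NUMBER()  AS ErrorNumber,
           ERROR_MESSAGE() AS ErrorMessage,
           ERROR_LINE()    AS ErrorLine;
END CATCH;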
Have you tried finding out whether there is a deadlock?
Please read this for a detailed procedure and understanding.
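If a deadlock is the suspect, one way to expose the details on SQL Server 2008 is trace flag 1222, which writes deadlock graphs to the error log (requires sysadmin; a sketch):

-- Write deadlock details to the SQL Server error log (instance-wide).
DBCC TRACEON (1222, -1);
-- ... reproduce the timeout, then inspect the log:
EXEC sp_readerrorlog;
-- Turn it off again when done:
DBCC TRACEOFF (1222, -1);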
I have a problem with this one stored procedure that works 99% of the time throughout our application, but will time out when called from a particular part of the application.
The table only has 3 columns and contains about 300 records. The stored proc only brings back one record, and looks like this:
"Select * from Table Where Column = #parameter"
When the sp is executed in Management Studio it completes in under a second (the timer shows :00).
The stored procedure is used a lot in our application, but only seems to time out in one particular part of our program. I can't think of any reason why such a simple sp would time out. Any ideas?
This is a VB.NET desktop application using SQL Server 2005.
You've got some code that's already holding a lock on the table, so it can't be read.
Try:
SELECT * FROM Table WITH (NOLOCK) WHERE Column = @parameter
We had a very similar problem: several stored procedures that would keep timing out in the application (~30 sec) but run fine in SSMS.
The short-term solution we used was to re-run the stored procedures, which fixed the problem temporarily. If this also fixes the problem temporarily for you, then you should investigate parameter sniffing problems.
For further information see http://dannykendrick.blogspot.co.nz/2012/08/sql-parameter-sniffing.html
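If sniffing is confirmed, one classic workaround is to copy the parameters into local variables, so the optimizer plans against average statistics instead of the sniffed value. A sketch with placeholder names (the DECLARE/SET split keeps it SQL Server 2005 compatible):

-- Placeholder procedure demonstrating the local-variable workaround.
CREATE PROCEDURE dbo.FindByColumn
    @parameter INT
AS
BEGIN
    DECLARE @localParam INT;
    SET @localParam = @parameter;  -- optimizer can no longer sniff the value
    SELECT *
    FROM dbo.MyTable
    WHERE SomeColumn = @localParam;
END;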
You need to get performance metrics. Use SQL Profiler to confirm whether it is the SP that's slow at that time, or something else. If it is the SQL that's slow, consider things like locks that may be forcing your query to wait. Let us know, and we might be able to give more specific information at that point.
If it's not the SP but, say, the VB code, a decent profiler like Red Gate's ANTS or JetBrains' dotTrace may help.
We are experiencing some very annoying deadlock situations in a production SQL Server 2000 database.
The main setup is the following:
SQL Server 2000 Enterprise Edition.
Server is coded in C++ using ATL OLE DB.
All database objects are being accessed trough stored procedures.
All UPDATE/INSERT stored procedures wrap their internal operations in a BEGIN TRAN ... COMMIT TRAN block.
I collected some initial traces with SQL Profiler, following several articles on the Internet like this one (ignore that it refers to the SQL Server 2005 tools; the same principles apply). From the traces it appears to be a deadlock between two UPDATE queries.
We have taken some measures that may have reduced the likelihood of the problem happening:
SELECT WITH (NOLOCK). We have changed all the SELECT queries in the stored procedures to use WITH (NOLOCK). We understand the implications of dirty reads, but the data being queried is not that important, since we do a lot of automatic refreshes and under normal conditions the UI will have the right values.
READ UNCOMMITTED. We have changed the transaction isolation level in the server code to READ UNCOMMITTED.
Reduced transaction scope. We have reduced the time a transaction is held, to minimize the probability of a database deadlock taking place.
We are also questioning the fact that we have a transaction inside the majority of the stored procedures (the BEGIN TRAN ... COMMIT TRAN block). In this situation my guess is that the transaction isolation level is SERIALIZABLE, right? And if there is also a transaction isolation level specified in the source code that calls the stored procedure, which one applies?
This is a processing intensive application and we are hitting the database a lot for reads (bigger percentage) and some writes.
If this were a SQL Server 2005 database I could go with Geoff Dalgas's answer on a deadlock issue concerning Stack Overflow, if that is even applicable to the issue I am running into. But upgrading to SQL Server 2005 is not, at the present time, a viable option.
As these initial attempts have failed, my question is: how would you go on from here? What steps would you take to reduce or even avoid the deadlocks, and what commands/tools should I use to better expose the problem?
A few comments:
The isolation level explicitly specified in your stored procedure overrides the isolation level of the caller. (And no: a BEGIN TRAN block by itself does not make the transaction SERIALIZABLE; the default isolation level is READ COMMITTED.)
If sp_getapplock is available on 2000, I'd use it:
http://sqlblogcasts.com/blogs/tonyrogerson/archive/2006/06/30/855.aspx
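A minimal sketch of what that looks like (the resource name is arbitrary; the default lock owner is the current transaction, so it must run inside one):

BEGIN TRAN;
DECLARE @rc INT;
-- Serialize competing updaters on a named application lock.
EXEC @rc = sp_getapplock
     @Resource = 'MyCriticalSection',
     @LockMode = 'Exclusive',
     @LockTimeout = 10000;   -- milliseconds
IF @rc >= 0   -- 0 or 1 means the lock was granted
BEGIN
    -- ... the UPDATE statements that used to deadlock ...
    COMMIT TRAN;
END
ELSE
    ROLLBACK TRAN;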
In many cases the SERIALIZABLE isolation level increases the chance you get a deadlock.
A good resource for 2000:
http://www.code-magazine.com/article.aspx?quickid=0309101&page=1
Also some of Bart Duncan's advice might be applicable:
http://blogs.msdn.com/bartd/archive/2006/09/09/747119.aspx
In addition to Alex's answer:
Eyeball the code to see if tables are being accessed in the same order. We did this recently and reordered the code to always access parent then child. The system had grown, the code and features were more complex, and there were more users: we simply started getting deadlocks.
See if transactions can be shortened (e.g. start later, finish earlier, do less processing inside them).
Identify which code you'd like not to fail, and use SET DEADLOCK_PRIORITY LOW in the other code.
We've used this (SQL 2005 has more options here) to make sure that some code will never be deadlocked, sacrificing the other code.
If you have a SELECT at the start of the transaction to prepare some data, consider HOLDLOCK (and maybe UPDLOCK) to keep it locked for the duration. We use this occasionally to stop writes to the table by other processes.
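Putting the last two points together, a sketch (the table, column and variable names are hypothetical):

-- In the code you can afford to sacrifice:
SET DEADLOCK_PRIORITY LOW;   -- this session becomes the preferred deadlock victim

-- In the code you want to protect:
BEGIN TRAN;
-- Take and hold an update lock on the rows we'll modify later,
-- so nothing can sneak in between the read and the write.
SELECT Total
FROM dbo.Parent WITH (UPDLOCK, HOLDLOCK)
WHERE ParentId = @id;        -- @id: placeholder parameter
-- ... work that depends on the row staying stable ...
COMMIT TRAN;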
The reason for the deadlocks in my scenario turned out after all to be the indexes. We were using (default-generated) non-clustered indexes for the primary keys of the tables. Changing to clustered indexes fixed the problem.
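For anyone in the same situation, the change is along these lines (table, constraint and column names are placeholders; note that recreating a primary key as clustered rewrites the table, so do it in a maintenance window):

-- Drop the nonclustered PK and recreate it as clustered.
ALTER TABLE dbo.MyTable DROP CONSTRAINT PK_MyTable;
ALTER TABLE dbo.MyTable
    ADD CONSTRAINT PK_MyTable PRIMARY KEY CLUSTERED (Id);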
My guess would be that you are experiencing deadlocks, either:
Because your DML statements (probably the UPDATEs) are escalating to table locks, or
Because different stored procedures are accessing the same tables in transactions, but in a different order.
To address this, I would first examine the stored procedures and make sure that the modification statements have the indexes they need.
Note: this applies to both the target tables and the source tables (despite NOLOCK, an UPDATE's source tables will get locks too). Check the query plans for scans in user stored procedures. Unlike batch or bulk operations, most user queries and DML statements work on a small subset of the table rows, and so should not be locking the entire table.
Secondly, I would check the stored procedures to ensure that all data access within a stored procedure is done in a consistent order (Parent -> Child is usually preferred).
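To illustrate the ordering point, a hypothetical procedure; the key is that every procedure in the system touches Parent before Child, never the other way around, so two transactions may block each other briefly but cannot deadlock on those two tables:

-- Hypothetical names; every procedure acquires locks Parent first, then Child.
CREATE PROCEDURE dbo.UpdateParentAndChild
    @ParentId INT
AS
BEGIN
    BEGIN TRAN;
    -- Always lock Parent first...
    UPDATE dbo.Parent SET ModifiedAt = GETDATE() WHERE ParentId = @ParentId;
    -- ...then Child. A procedure that went Child -> Parent could deadlock with this one.
    UPDATE dbo.Child  SET ModifiedAt = GETDATE() WHERE ParentId = @ParentId;
    COMMIT TRAN;
END;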