I've got a problem with a terribly performing stored procedure. The odd part is that if I run the procedure, it takes hours; if I run the contents of the procedure as a batch in SSMS, it runs in a reasonable amount of time. I have narrowed the problem to a single statement within the proc.
My first thought was a bad cached query plan. However, adding WITH RECOMPILE to the proc, or OPTION (RECOMPILE) to the offending statement within it, made no difference.
So I captured the (actual) execution plan from both exec-ing the procedure and running the statements directly and found this difference:
The slow stored procedure version has a <Merge ManyToMany="True"> element in the xml whereas the plain sql version has a <Hash> element.
I don't think I know enough about execution plans to determine why it would choose one or another.
Both versions were run on the same data, e.g.:
BEGIN TRANSACTION;
exec myproc; --capture plan
ROLLBACK TRANSACTION;
BEGIN TRANSACTION
SQL Statements from procedure -- capture 2nd plan
ROLLBACK TRANSACTION
What sorts of things can influence the plan within a procedure so that it differs from executing the statements directly in SSMS? Does anyone have any suggestions on how to narrow this down further?
I don't know how much help the particular query is here, but it's a MERGE statement:
MERGE schema.UpdatableView AS FORUPDATE
USING
(
large select statement that's not part of the problem
) DATA
ON DATA.field = FORUPDATE.field
WHEN MATCHED THEN -- 50% of the cost is here
UPDATE SET
LOTS of field updates
WHEN NOT MATCHED THEN -- other 50% is here
INSERT (FIELDS)
VALUES (FIELDS)
OPTION (RECOMPILE)
;
The updatable views may be part of the problem, but SQL Profiler doesn't seem to think so. The underlying INSERT and UPDATE triggers on the view don't start until the statement has been running for a few hours, and they complete in a reasonable amount of time.
This is usually due to different runtime settings, such as ANSI_NULLS and QUOTED_IDENTIFIER. I suggest you recreate the stored procedure and the views in the same SSMS tab (same session) that you use to test the query. This will make sure both use the same settings; I think you will notice that both then use the same plan.
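If you want to check before recreating anything, this sketch compares the options the module was created with against the ones your current session uses (dbo.myproc stands in for your procedure name):
-- Settings the procedure was created with
SELECT m.uses_ansi_nulls, m.uses_quoted_identifier
FROM sys.sql_modules AS m
WHERE m.object_id = OBJECT_ID('dbo.myproc');
-- Settings of the current SSMS session
SELECT SESSIONPROPERTY('ANSI_NULLS') AS ansi_nulls,
       SESSIONPROPERTY('QUOTED_IDENTIFIER') AS quoted_identifier;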
Related
I have a stored procedure that worked fine previously. It took 4 to 5 seconds to get the results.
I haven't used this stored procedure for the past two months. When I call the same procedure now, it takes more than 5 minutes to produce the result.
(No records have been added to my source tables in the past two months.)
When I take the contents of the stored procedure and execute them as a T-SQL block, it is back to normal. But when I convert it back to a stored procedure, it again takes more than 5 minutes.
I am wondering why it is behaving like this. I use 6 table variables; I just populate them and join them all to get the desired results.
I have already tried the options below:
With Recompile at the stored procedure level
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
sp_updatestats
but there is no improvement. When I execute it as a T-SQL block, it works fine.
Please suggest any ideas to optimize the stored procedure.
In your queries, add OPTION(OPTIMIZE FOR UNKNOWN) (as the last clause) to prevent parameter sniffing. For syntax and explanation, see the documentation on Query Hints.
What SQL Server does the first time it runs a Stored Procedure is optimize the execution plan(s) for the parameters that were passed to it. This is done in a process that is called Parameter Sniffing.
In general, execution plans are cached by SQL Server so that SQL Server doesn't have to recompile each time for the same query. The next time the procedure is run, SQL Server will re-use the execution plan(s) for the queries in it... However, the execution plan(s) might be totally inefficient if you call it (them) with different parameters.
The option I gave you will tell the SQL compiler that the execution plan should not be optimized for specific parameters, but rather for any parameter that is passed to the Stored Procedure.
To quote the documentation:
OPTIMIZE FOR UNKNOWN
Instructs the query optimizer to use statistical data instead of the initial values for all local variables when the query is compiled and optimized, including parameters created with forced parameterization.
In some cases Stored Procedures benefit from Parameter Sniffing, and in some cases they don't. For the Stored Procedures that don't benefit from Parameter Sniffing, you can add the option to each query that uses any of the parameters of the Stored Procedure.
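As a minimal sketch of where the hint goes (the procedure, table, and parameter names here are made up for illustration, not from the question):
CREATE PROCEDURE dbo.usp_GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    SELECT o.OrderId, o.OrderDate
    FROM dbo.Orders AS o
    WHERE o.CustomerId = @CustomerId
    OPTION (OPTIMIZE FOR UNKNOWN); -- plan is built from density statistics, not the sniffed value
END;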
You may have a bad execution plan associated with that proc.
Try this one:
DBCC FREESYSTEMCACHE ('ALL') WITH MARK_IN_USE_FOR_REMOVAL;
You may also find this interesting to read
http://www.sqlpointers.com/2006/11/parameter-sniffing-stored-procedures.html
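If you would rather not clear the whole cache, a narrower sketch (assuming the procedure is dbo.myproc; adjust the name) evicts only that procedure's cached plan:
DECLARE @plan_handle VARBINARY(64);
-- Find the cached plan handle for this procedure
SELECT @plan_handle = ps.plan_handle
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.object_id = OBJECT_ID('dbo.myproc');
IF @plan_handle IS NOT NULL
    DBCC FREEPROCCACHE (@plan_handle); -- removes only this plan from the cache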
I have a stored procedure, and when I execute it using exec proc_name it takes 1 minute.
If I copy the code from the stored procedure, declare the params as variables, and then execute the code, it takes 10 seconds.
What's wrong?
Am I missing something here?
I am asking because I use ADO.NET and I get a timeout error when I execute that stored procedure using ExecuteNonQuery.
Thank you
It's caused by a suboptimal plan being used.
You mention that the s.p. has parameters; I've had similar issues due to 'parameter sniffing'.
The quickest check to see if this is the issue is, inside the SP, to copy the input parameters into local variables and then use only the local variables.
This stops, e.g., optimisation for certain parameter values at the expense of others.
I've had this before in an s.p. which had int parameters where certain parameter values changed the control flow (as well as how queries would be executed) a bit.
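A minimal sketch of that pattern (the procedure, table, and parameter names are made up for illustration):
CREATE PROCEDURE dbo.usp_SearchOrders
    @Status INT
AS
BEGIN
    -- Copy the input parameter into a local variable so the optimizer
    -- cannot sniff the caller's value at compile time.
    DECLARE @localStatus INT = @Status;

    SELECT o.OrderId, o.OrderDate
    FROM dbo.Orders AS o
    WHERE o.Status = @localStatus;
END;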
Start SQL Profiler and compare those two executions: is the extra 50 minutes really spent on the server? Are the queries really the same?
You can then copy the actual query text, run it manually, and check the execution plan.
Try executing the proc with the Include Actual Execution Plan option switched on.
It will tell you exactly which part takes the time, and you/we can probably take it (and suggestions) from there.
Thanks
As a general idea, query plans are cached differently for ad-hoc statements than for stored procedures, so the execution time can differ because the chosen query plan can differ.
As suggestions, I would think of:
1/ Invalidate the query plan associated with that stored procedure:
sp_recompile <procname>
2/ Delete all query plans from the cache (the hard way, not recommended in PROD unless you understand the consequences very well):
DBCC FREEPROCCACHE
3/ Update statistics for the tables involved.
4/ Have a look at the actual execution plan for both cases and isolate where the performance bottleneck is (see the sketch below for pulling the procedure's cached plan). Post some code and we'll provide more details.
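For point 4, one way to pull the plan the procedure is actually using out of the cache is the following sketch (dbo.myproc is a placeholder name):
SELECT ps.execution_count,
       ps.total_elapsed_time,
       qp.query_plan -- XML plan; click it in SSMS to open it graphically
FROM sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
WHERE ps.object_id = OBJECT_ID('dbo.myproc');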
Option 1: run the stored procedure's ALTER PROCEDURE statement again (recreate it in place) and try again with parameters.
Option 2: EXEC sp_updatestats
Option 3: If option 1 fails, add OPTION (RECOMPILE) at the end of your query.
E.g.: SELECT Id FROM Table1 ORDER BY Id DESC OPTION (RECOMPILE)
If this runs faster, the slowdown was due to the execution plan cached by SQL Server.
If I have temp tables being created in a stored procedure's definition and then drop them when I am done with them, will it result in recompilation of the execution plan?
For the stored procedure, every time it's called? Any personal experience?
Any explanation please?
Since the temp tables are dropped at the end of every call, the execution plan becomes invalid. Does SQL Server still keep hold of the execution plan and reuse it on the next call, or does it recompile it every time it's called?
Dropping of a temporary table doesn't matter.
If a table is created (either permanent or temporary), all statements after that statement are recompiled (even if they don't refer to the table). Calls to executable objects using EXEC aren't recompiled. This is so SQL Server can create the plan after the objects are created (in this case, the temp table).
You can monitor recompilation using Extended Events (the sql_statement_recompile event) or SQL Trace / SQL Server Profiler (the SQL:StmtRecompile event):
A statement starts to execute. SP:StmtStarting or SQL:StmtStarting is raised
The statement is recompiled. SQL:StmtRecompile is raised. SP:StmtStarting or SQL:StmtStarting is raised again
The statement is finished. SP:StmtCompleted or SQL:StmtCompleted is raised
The whole procedure is not recompiled; only the individual statements are.
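If you want to watch this happen, here is a minimal Extended Events sketch (the session name is arbitrary) that captures statement-level recompiles along with the statement text:
CREATE EVENT SESSION [watch_recompiles] ON SERVER
ADD EVENT sqlserver.sql_statement_recompile
    (ACTION (sqlserver.sql_text, sqlserver.session_id))
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [watch_recompiles] ON SERVER STATE = START;
-- Run the procedure, then inspect the session's ring buffer
-- (SSMS: Management > Extended Events > Sessions > watch_recompiles).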
Generally speaking, any DDL in your stored procedure will result in recompilation, so if you use CREATE TABLE and DROP TABLE instructions you are going to get recompilations.
This can be mitigated by putting the DDL at the start of the stored procedure, but you should test it first and see the effect with your own eyes on your server.
If the dataset you have to put in the temporary table is small and you don't need non-unique indexes, you should try to use table variables instead.
It's not a good idea to put too many rows in a table variable because table variables don't have statistics; SQL Server always "thinks" they contain only one row, so the query plan can end up some way from the optimal one (but it does avoid the recompilations caused by temporary table creation).
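For a small row set the swap is mechanical, roughly as in this sketch (dbo.Orders and the column names are placeholders):
-- Temp table: DDL inside the procedure, can trigger statement recompiles
CREATE TABLE #Results (Id INT PRIMARY KEY, Total MONEY);
INSERT INTO #Results (Id, Total) SELECT Id, Total FROM dbo.Orders;
DROP TABLE #Results;
-- Table variable: no DDL-driven recompiles, but also no statistics
DECLARE @Results TABLE (Id INT PRIMARY KEY, Total MONEY);
INSERT INTO @Results (Id, Total) SELECT Id, Total FROM dbo.Orders;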
Temp tables can cause recompilation. This happens because they are treated like regular tables by the SQL Server engine. When the tables on which the underlying queries rely change significantly, SQL Server detects the change (using auto-update statistics) and marks the dependent queries for recompilation so that the next execution can create an optimal execution plan.
Once the temp table, or the queries relying on it, changes, the query engine cannot reuse the cached plan, as it would no longer fit the query.
It should be noted that table variables inherently do not cause recompilation. In some situations these may be a better choice.
See http://sqlserverplanet.com/optimization/temp-table-recompiles for further information on temp table recompilation.
I have a stored procedure used for a DI report that contains 62 sub-queries combined with UNION ALL. Recently, performance went from under 1 minute to over 8 minutes, and SQL Profiler was showing very high CPU and reads. The procedure currently assigns the passed-in parameters to local variables to prevent parameter sniffing.
Running the contents of the procedure as a SELECT statement, performance was back to under a minute.
Calling the procedure via EXEC in Management Studio, performance was horrible, at over 8 minutes.
Calling the procedure via EXEC with the WITH RECOMPILE option, performance did not improve. I ran DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS and still saw no improvement.
In the end, I dropped the procedure and re-applied it and performance is now back.
Can anyone help explain to me why the initial steps did not correct the performance of the procedure but dropping and re-applying the procedure did?
Sounds like blocking parameter sniffing produced a bad plan. When you use local variables, the query optimizer uses the density of each column to come up with cardinality estimates, essentially optimizing for the average value. If your data distribution is significantly skewed, this estimate will be significantly off for some values.
This theory explains why your initial steps did not work: using WITH RECOMPILE or running DBCC FREEPROCCACHE will not help if parameter sniffing is blocked, because it will just produce the same plan every time.
Since you say that running the contents of the procedure as a SELECT statement made it faster, I think you actually need parameter sniffing. However, you should also try WITH RECOMPILE if the compilation time is acceptable; otherwise there is a risk of getting stuck with a bad plan based on atypical sniffed values.
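If you want the sniffed values back without un-wrapping every local variable, one hedged option is a statement-level recompile on just the expensive query, as in this sketch (the table, columns, and @RegionId parameter are placeholders):
-- Use the procedure parameter directly and recompile this statement on each call,
-- so the optimizer sees the real value instead of the column density.
SELECT r.ReportLine, r.Amount
FROM dbo.ReportData AS r
WHERE r.RegionId = @RegionId -- the original parameter, not a local copy
OPTION (RECOMPILE);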
I have a report that renders data returned from a stored procedure. Using Profiler I can catch the call to the stored procedure from Reporting Services.
The report fails, stating that it timed out, yet I can execute the stored procedure from SSMS and it returns the data in five to six seconds.
Note that in the example test run only two rows are returned to the report for rendering, though within the stored procedure it may have been working over thousands or even millions of records to collate the result passed back to Reporting Services.
I know the stored procedure could be optimised more but I do not understand why SSRS would be timing out when the execution only seems to take a few seconds to execute from SSMS.
Also another issue has surfaced. If I recreate the stored procedure, the report starts to render perfectly fine again. That is fine except after a short period of time, the report starts timing out again.
The return of the time out seems to be related to new data being added into the main table the report is running against. In the example I was testing, just one hundred new records being inserted was enough to screw up the report.
More correctly, I imagine it is not the report that is the root cause; it is the stored procedure that is causing the timeout when executed from SSRS.
Once it is timing out again, the best fix I have so far is to recreate the stored procedure. This doesn't seem an ideal solution.
The problem also only seems to be occurring in our production environment. Our test and development platforms do not seem to exhibit the same problem, though dev and test do not have the same volume of records as production.
The problem, as you describe it, seems to come from variations in the execution plan of some parts of your stored procedure. Look at what statistics are kept on the tables used and how adding new rows affects them.
If you're adding a lot of rows at the end of the range of a column (think about adding autonumbers, or timestamps), the histogram for that column will become outdated rapidly.
You can force an immediate update from T-SQL by executing the UPDATE STATISTICS statement.
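A minimal sketch (the table and statistics names are placeholders; FULLSCAN is optional and makes the sample exhaustive):
-- Refresh all statistics on the reported table immediately
UPDATE STATISTICS dbo.MainReportTable WITH FULLSCAN;
-- Or target a single statistics object/index if you know which histogram is stale
UPDATE STATISTICS dbo.MainReportTable IX_MainReportTable_CreatedDate;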
I have also had this issue where the SPROC takes seconds to run yet SSRS simply times out.
I have found from my own experience that there are a couple of different methods to overcome this issue.
The first is parameter sniffing. When your stored procedure is executed from SSRS, SQL Server will "sniff" out your parameters to see how your SPROC is using them, and will then produce an execution plan based on its findings. This is good the first time you execute your SPROC, but you don't want it doing this every time you run your report. So I declare a new set of variables at the top of my SPROCs which simply store the parameters passed into the query, and use these new variables throughout the query.
Example:
CREATE PROCEDURE [dbo].[usp_REPORT_ITD001]
@StartDate DATETIME,
@EndDate DATETIME,
@ReportTab INT
AS
-- Deter parameter sniffing
DECLARE @snf_StartDate DATETIME = @StartDate
DECLARE @snf_EndDate DATETIME = @EndDate
DECLARE @snf_ReportTab INT = @ReportTab
...this means that when your SPROC is executed by SSRS it only looks at the first few lines of your query for the passed parameters rather than the whole of your query, which cuts down execution time in SSRS considerably.
The second concerns temp tables. If your SPROC has a lot of temp tables that are declared as table variables (DECLARE @MyTable AS TABLE (...)), these are really intensive on the server (in terms of memory) when generating reports. By using hash temp tables (SELECT MyCol1, MyCol2 INTO #MyTable) instead, SQL Server will store your temp tables in tempdb on the server rather than in system memory, making the report generation less intensive.
Sometimes adding the WITH RECOMPILE option to the CREATE statement of the stored procedure helps.
This is effective in situations where the number of records processed by the procedure changes in such a way that the original execution plan is no longer optimal.
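As a sketch (the names are placeholders), the option goes between the parameter list and AS:
CREATE PROCEDURE dbo.usp_REPORT_SALES
    @StartDate DATETIME,
    @EndDate   DATETIME
WITH RECOMPILE -- a fresh plan is compiled on every execution; nothing is cached
AS
BEGIN
    SELECT s.SaleId, s.Amount
    FROM dbo.Sales AS s
    WHERE s.SaleDate BETWEEN @StartDate AND @EndDate;
END;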
Basically, all I've done so far is optimise the sproc a bit more, and that seems to have at least temporarily solved the problem.
I would still like to know what the difference is between calling the sproc from SSMS and SSRS.