When I execute a stored procedure with certain inputs, it takes more than 30 seconds to run, but when I execute the underlying query directly with the same inputs, it takes only 3 seconds.
I suspect this is due to a poor execution plan sitting in the cache, so first I dropped the stored procedure and recreated it, but that did not help. Then I added the WITH RECOMPILE option to the stored procedure and tried again, but that did not help either.
Then I tried assigning the input parameters to local variables and using those local variables inside the stored procedure (instead of the actual parameters), and after that the execution time of the stored procedure dropped to 3 seconds. (If this were parameter sniffing, the WITH RECOMPILE option should have resolved it too, right?)
I even restored the same backup on another server and executed the same stored procedure with the same inputs there, and it took only 3 seconds.
So can someone please explain the logic behind this?
Related
I have a stored procedure that worked fine previously; it took 4 to 5 seconds to return its results.
I haven't used this stored procedure for the past two months. When I call it now, it takes more than 5 minutes to produce the result.
(No records have been added to my source tables in the past two months.)
When I convert the stored procedure body into a plain T-SQL block and execute it, performance is back to normal. But when I convert it back into a stored procedure, it again takes more than 5 minutes.
I am wondering why it behaves like this. I use 6 table variables; I simply populate them and join them all to get the desired results.
I have already tried the options below:
WITH RECOMPILE at the stored procedure level
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
sp_updatestats
but there is no improvement. When I execute it as a T-SQL block it works fine.
Please suggest any ideas for optimizing the stored procedure.
In your queries, add OPTION(OPTIMIZE FOR UNKNOWN) (as the last clause) to prevent parameter sniffing. For syntax and explanation, see the documentation on Query Hints.
What SQL Server does the first time it runs a Stored Procedure is optimize the execution plan(s) for the parameters that were passed to it. This is done in a process that is called Parameter Sniffing.
In general, execution plans are cached by SQL Server so that SQL Server doesn't have to recompile each time for the same query. The next time the procedure is run, SQL Server will re-use the execution plan(s) for the queries in it... However, the execution plan(s) might be totally inefficient if you call it (them) with different parameters.
The option I gave you tells the SQL compiler that the execution plan should not be optimized for specific parameter values, but rather for any parameter that might be passed to the Stored Procedure.
To quote the documentation:
OPTIMIZE FOR UNKNOWN
Instructs the query optimizer to use statistical data instead of the initial values for all local variables when the query is compiled and optimized, including parameters created with forced parameterization.
In some cases Stored Procedures benefit from parameter sniffing, and in some cases they don't. For the Stored Procedures that don't benefit from parameter sniffing, you can add the option to each query that uses any of the Stored Procedure's parameters.
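As a minimal sketch of where the hint goes (the procedure, table, and column names below are made up for illustration):

CREATE PROCEDURE dbo.usp_GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    -- The hint is appended to the individual query, not to the CREATE PROCEDURE statement.
    SELECT OrderId, OrderDate, TotalAmount
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (OPTIMIZE FOR UNKNOWN);
END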
You may have a bad execution plan associated with that proc.
Try this one:
DBCC FREESYSTEMCACHE ('ALL') WITH MARK_IN_USE_FOR_REMOVAL;
You may also find this interesting to read
http://www.sqlpointers.com/2006/11/parameter-sniffing-stored-procedures.html
I have a stored procedure that does some inserts into a table. If I execute that same stored procedure repeatedly, are the inserts from each execution reflected in the table by the time it ends, or could it happen that the inserts land after the stored procedure finishes and overlap with the execution of the next instance of that stored procedure?
I hope I was clear; if not, please correct me.
Thanks
Whatever work is done in a Stored Procedure, unless explicitly rolled back or automatically rolled back due to an error, will be there when the Stored Procedure exits. Once a Stored Procedure exits, there is no more work that it could be doing.
This means that within a single session, any number of executions of a stored procedure are handled serially -- one after the other, no overlap.
However, across multiple sessions / connections, the work being done in a Stored Procedure certainly can overlap if that same code (Stored Procedure or even ad hoc SQL) is run at the same time across other sessions / connections.
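As a minimal sketch of the single-session case (the procedure and table names are made up for illustration):

CREATE TABLE dbo.AuditLog
(
    Id INT IDENTITY(1,1) PRIMARY KEY,
    Note VARCHAR(50) NOT NULL,
    LoggedAt DATETIME2 NOT NULL DEFAULT SYSDATETIME()
);
GO
CREATE PROCEDURE dbo.usp_LogNote
    @Note VARCHAR(50)
AS
BEGIN
    -- No explicit transaction, so the insert is auto-committed before the procedure returns.
    INSERT INTO dbo.AuditLog (Note) VALUES (@Note);
END
GO
-- Within one session these run strictly one after the other;
-- the first row is already in the table before the second EXEC starts.
EXEC dbo.usp_LogNote @Note = 'first call';
EXEC dbo.usp_LogNote @Note = 'second call';
SELECT Id, Note, LoggedAt FROM dbo.AuditLog;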
I have a system set up to execute a stored procedure 10 times in a single batch:
exec procedure 1
exec procedure 2
exec procedure 3
exec procedure 4
...
exec procedure 10
The procedure is designed to accumulate a running total based on the ID of the record in a target table.
When I run that batch, I get results that are out of sync. The running total should simply be the previous row's running total plus the current row's value.
When I run the same batch with a GO statement in between each, it takes much longer to run, but executes correctly.
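For clarity, the version that executes correctly looks like this, with GO making each execution its own batch (the procedure name and argument are placeholders, as above):

exec procedure 1
GO
exec procedure 2
GO
...
GO
exec procedure 10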
Is there any kind of hint (like MAXDOP 1) that can be used in this situation to force the procedures to execute and complete in order without going out of sync?
I should add that the stored procedure being called calls several procedures itself, in case that has any bearing on a solution.
I did a bit more testing on this, and it looks like my initial thoughts were incorrect. In several tests with batches using GO statements, only a few records in the batch had their running balances updated, while the rest stayed out of sync. It appears that in my initial tests the first 10 records updated properly, and I didn't notice anything else because the remaining values in that section were already correct until a later part of the data set.
This looks like an issue internal to the procedure, not with repeated execution of the procedure. The weird part is that we never experienced this issue on a single-core system, which still leaves me thinking this is a parallelism issue, but most likely one internal to the procedure.
Sorry for wasting your time.
I have two stored procedures, where the second is an improvement of the first.
I'm trying to measure exactly how big that improvement is.
1/ Measuring wall-clock time doesn't seem to be an option, as I get different execution times on each run. Even worse, sometimes (rarely, but it happens) the execution time of the second stored procedure is longer than that of the first (I guess due to the server workload at that moment).
2/ Including client statistics also gives different results each time.
3/ DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE are good, but it's the same story...
4/ SET STATISTICS IO ON could be an option, but how can I get an overall score when many tables are involved in my stored procedures? (See the sketch at the end of this question.)
5/ Including the actual execution plan could also be an option. I get an estimated subtree cost of 0.3253 for the first stored procedure and 0.3079 for the second. Can I say the second stored procedure is 6% faster (= 0.3253/0.3079)?
6/ Using the "Reads" field from SQL Server Profiler? Even there I get different results when I execute those stored procedures many times.
So how can I say the second stored procedure is x% faster than the first, regardless of the execution conditions (the server's workload, the server on which the procedures are executed, etc.)?
If that is not possible, how can I show that the second stored procedure is better optimized?
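For concreteness, this is the kind of harness I mean for option 4 (the procedure names and parameter are placeholders; DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS need elevated permissions and shouldn't be run on a production server):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

DBCC FREEPROCCACHE;      -- start both runs from a cold plan cache
DBCC DROPCLEANBUFFERS;   -- and a cold data cache
EXEC dbo.usp_ReportV1 @SomeParam = 1;

DBCC FREEPROCCACHE;
DBCC DROPCLEANBUFFERS;
EXEC dbo.usp_ReportV2 @SomeParam = 1;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
-- Sum the "logical reads" reported per table in the Messages tab to get a single overall figure per procedure.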
I have a report that renders data returned from a stored procedure. Using Profiler, I can catch the call to the stored procedure coming from Reporting Services.
The report fails, stating that it timed out, yet I can execute the stored procedure from SSMS and it returns the data in five to six seconds.
Note that in the example test run only two rows are returned to the report for rendering, though inside the stored procedure it may be working over thousands or even millions of records to collate the result passed back to Reporting Services.
I know the stored procedure could be optimised further, but I do not understand why SSRS would be timing out when the execution only seems to take a few seconds from SSMS.
Another issue has also surfaced: if I recreate the stored procedure, the report starts rendering perfectly fine again. That is fine, except that after a short period of time the report starts timing out again.
The return of the timeout seems to be related to new data being added to the main table the report runs against. In the example I was testing, inserting just one hundred new records was enough to break the report.
More precisely, I imagine the report is not the root cause; it is the stored procedure that causes the timeout when executed from SSRS.
Once it is timing out again, the best fix I have so far is to recreate the stored procedure, which doesn't seem an ideal solution.
The problem also only seems to occur in our production environment; our test and development platforms do not exhibit it, though dev and test do not have the same volume of records as production.
The problem, as you describe it, seems to come from variations in the execution plan for some parts of your stored procedure. Look at which statistics are kept on the tables used and how adding new rows affects them.
If you're adding a lot of rows at the end of the range of a column (think about adding autonumbers, or timestamps), the histogram for that column will become outdated rapidly. You can force an immediate update from T-SQL by executing the UPDATE STATISTICS statement.
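As a minimal sketch (the table name is a placeholder for the main table your report runs against):

-- Refresh all statistics on the table with a full scan
UPDATE STATISTICS dbo.MainReportTable WITH FULLSCAN;

-- Or refresh statistics for every table in the database
EXEC sp_updatestats;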
I have also had this issue where the SPROC takes seconds to run yet SSRS simply times out.
I have found from my own experience that there are a couple of different methods to overcome this issue.
The first is parameter sniffing. When your stored procedure is executed from SSRS, SQL Server will "sniff" your parameters to see how your SPROC is using them and then produce an execution plan based on its findings. This is good the first time you execute your SPROC, but you don't want it to be doing this every time you run your report. So I declare a new set of variables at the top of my SPROCs that simply store the parameters passed in, and I use these new variables throughout the query.
Example:
CREATE PROCEDURE [dbo].[usp_REPORT_ITD001]
    @StartDate DATETIME,
    @EndDate DATETIME,
    @ReportTab INT
AS
-- Deter parameter sniffing
DECLARE @snf_StartDate DATETIME = @StartDate
DECLARE @snf_EndDate DATETIME = @EndDate
DECLARE @snf_ReportTab INT = @ReportTab
...this means that when your SPROC is executed by SSRS, the passed parameters only appear in those first few lines of the query rather than throughout the whole of it, so the plan is not built around one specific set of sniffed values. This cuts down execution time considerably in SSRS.
If your SPROC declares a lot of temp tables as table variables (DECLARE @MyTable AS TABLE ...), these can be hard on the server when generating reports, because table variables carry no column statistics for the optimizer. Using temp tables instead (SELECT MyCol1, MyCol2 INTO #MyTable) stores the intermediate results in tempdb where they get statistics and can be indexed, making report generation less intensive.
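As a minimal sketch of the two approaches (the table and column names are placeholders):

-- Table variable: no column statistics, so the optimizer tends to assume very few rows
DECLARE @MyTable TABLE (MyCol1 INT, MyCol2 VARCHAR(50));
INSERT INTO @MyTable (MyCol1, MyCol2)
SELECT MyCol1, MyCol2 FROM dbo.SourceTable;

-- Temp table: created in tempdb, gets column statistics and can be indexed
SELECT MyCol1, MyCol2
INTO #MyTable
FROM dbo.SourceTable;

CREATE INDEX IX_MyTable_MyCol1 ON #MyTable (MyCol1);

DROP TABLE #MyTable;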
Sometimes adding the WITH RECOMPILE option to the CREATE statement of the stored procedure helps.
This is effective in situations where the number of records processed by the procedure changes in such a way that the original execution plan is no longer optimal.
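As a minimal sketch of the syntax, reusing the procedure signature from the earlier example (the body is a placeholder):

CREATE PROCEDURE dbo.usp_REPORT_ITD001
    @StartDate DATETIME,
    @EndDate DATETIME,
    @ReportTab INT
WITH RECOMPILE  -- a fresh plan is compiled on every execution; no plan is cached
AS
BEGIN
    -- report queries go here
    SELECT @StartDate AS StartDate, @EndDate AS EndDate, @ReportTab AS ReportTab;
END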
Basically, all I've done so far is optimise the sproc a bit more, and it seems to solve the problem at least temporarily.
I would still like to know what the difference is between calling the sproc from SSMS and SSRS.