SQL procedure execution order - sql-server

I have a system set up that executes a stored procedure 10 times in one batch:
exec procedure 1
exec procedure 2
exec procedure 3
exec procedure 4
...
exec procedure 10
The procedure is designed to accumulate a running total based on the ID of the record in a target table.
When I run that batch, the results come out of sync. The running total should simply be the previous row's running total plus the value of the current row.
When I run the same batch with a GO statement in between each, it takes much longer to run, but executes correctly.
Is there any kind of hint (like MAXDOP 1) that can be used in this situation to force the procedures to execute and complete in order, without going out of sync?
I should add that the stored procedure being called calls several procedures itself, if that has any bearing on a solution.
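For reference, the kind of running total described above could be computed set-based (on SQL Server 2012 or later) roughly as sketched below; the table and column names are hypothetical:
-- Hypothetical table: each row's RunningTotal should equal the previous row's RunningTotal plus the current Value
UPDATE t
SET t.RunningTotal = s.RunningTotal
FROM dbo.TargetTable AS t
JOIN (
    SELECT Id,
           SUM(Value) OVER (ORDER BY Id ROWS UNBOUNDED PRECEDING) AS RunningTotal
    FROM dbo.TargetTable
) AS s ON s.Id = t.Id;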

I did a bit more testing on this, and it looks like my initial thoughts were incorrect. I did several tests with batches using GO statements, and even then only a few records in the batch would have their running balances updated; the rest would stay out of sync. It looks like when I did my initial tests, the first 10 records updated properly, but I didn't notice anything else wrong in that section, since the remaining values were already correct until a later section of the data set.
This looks like an issue internal to the procedure, not with repeated execution of the procedure. The weird part is that we never experienced this issue on a single-core system, which still leaves me thinking this is a parallelism issue, but most likely one internal to the procedure.
Sorry for wasting your time.

Related

Stored procedure slowness in SQL Server

When I execute a stored procedure with some inputs, it takes more than 30 seconds to run, but when I directly execute the query with the same inputs, it takes only 3 seconds.
I guessed it was due to a poor execution plan staying in the cache, so first I dropped the stored procedure and recreated it, but that did not work. Then I added the WITH RECOMPILE option to the stored procedure and tried again, but that also did not work.
Then I tried assigning the input parameters to local variables and using those local variables inside the stored procedure (instead of the actual parameters), and after that the execution time of the stored procedure dropped to 3 seconds (if it's due to parameter sniffing, then the WITH RECOMPILE option should also resolve it, right?).
I even restored the same backup on another server and executed the same stored procedure with the same inputs, and it took only 3 seconds.
So can someone please clarify the logic behind it?

Will TSQL return faster results than stored procedure in SQL Server

I have a stored procedure that worked fine previously; it took 4 to 5 seconds to get the results.
I haven't used this stored procedure for the past two months. When I call the same procedure now, it takes more than 5 minutes to produce the result.
(No records have been added to my source tables in the past two months.)
When I converted the stored procedure into a plain TSQL block and executed it, performance was back to normal. But when I converted it back to a stored procedure, it again took more than 5 minutes.
I am wondering why it behaves like this. I use 6 table variables; I just populate them and join them all to get the desired results.
I already tried the below options
With Recompile at the stored procedure level
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
sp_updatestats
but there is no improvement. When I execute it as TSQL it works fine.
Please suggest any ideas to optimize the stored procedure.
In your queries, add OPTION(OPTIMIZE FOR UNKNOWN) (as the last clause) to prevent parameter sniffing. For syntax and explanation, see the documentation on Query Hints.
What SQL Server does the first time it runs a Stored Procedure is optimize the execution plan(s) for the parameters that were passed to it. This is done in a process that is called Parameter Sniffing.
In general, execution plans are cached by SQL Server so that SQL Server doesn't have to recompile each time for the same query. The next time the procedure is run, SQL Server will re-use the execution plan(s) for the queries in it... However, the execution plan(s) might be totally inefficient if you call it (them) with different parameters.
The option I gave you tells the SQL compiler that the execution plan should not be optimized for specific parameter values, but rather for any parameter that is passed to the Stored Procedure.
To quote the documentation:
OPTIMIZE FOR UNKNOWN
Instructs the query optimizer to use statistical data instead of the initial values for all local variables when the query is compiled and optimized, including parameters created with forced parameterization.
In some cases Stored Procedures benefit from Parameter Sniffing, and in some cases they don't. For the Stored Procedures that don't benefit from Parameter Sniffing, you can add the option to each query that uses any of the parameters of the Stored Procedure.
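For example, a minimal sketch (the procedure, table, and column names here are hypothetical):
-- Hypothetical procedure; the hint goes at the end of the query that uses the parameter
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    SELECT OrderId, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (OPTIMIZE FOR UNKNOWN); -- plan is built from average statistics, not the sniffed value
END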
You may have a bad execution plan associated with that proc.
Try this one:
DBCC FREESYSTEMCACHE ('ALL') WITH MARK_IN_USE_FOR_REMOVAL;
You may also find this interesting to read
http://www.sqlpointers.com/2006/11/parameter-sniffing-stored-procedures.html

Rerouting all stored procedure calls through a single stored procedure for logging purposes

I'm playing with the idea of rerouting every end-user stored procedure call of my database through a logging stored procedure. Essentially it will wrap the stored procedure call in some simple logging logic: who made the call, how long it took, etc.
Can this potentially create a bottleneck? I'm concerned that when the amount of total stored procedure calls grows this could become a serious problem.
Routing everything through a single point of entry is not optimal. Even if there are no performance issues, it is still something of a maintenance problem as you will need to expose the full range of Input Parameters that the real procs are accepting in the controller proc. Adding procs to this controller over time will require a bit of testing each time to make sure that you mapped the parameters correctly. Removing procs over time might leave unused input parameters. This method also requires that the app code pass in the params it needs to for the intended proc, but also the name (or ID?) of the intended proc, and this is another potential source of bugs, even if a minor one.
It would be better to have a general logging proc that gets called as the first thing of each of those procs. That is a standard template that can be added to any new proc quite easily. This leaves a clean API to the app code such that the app code is likewise maintainable.
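A sketch of what such a template could look like (the log table and procedure names here are made up for illustration):
-- Hypothetical logging helper, called as the first statement of each real procedure
CREATE PROCEDURE dbo.usp_LogProcCall
    @ProcName SYSNAME
AS
BEGIN
    SET NOCOUNT ON;
    -- dbo.ProcCallLog is an assumed logging table (ProcName, CalledBy, CalledAt)
    INSERT INTO dbo.ProcCallLog (ProcName, CalledBy, CalledAt)
    VALUES (@ProcName, SUSER_SNAME(), SYSUTCDATETIME());
END
GO
-- Template: every real proc starts with one call to the logger
CREATE PROCEDURE dbo.usp_DoRealWork
    @SomeParam INT
AS
BEGIN
    EXEC dbo.usp_LogProcCall @ProcName = N'dbo.usp_DoRealWork';
    -- ...actual work here...
END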
SQL can run the same stored procedure concurrently, as long as it doesn't cause blocking or deadlocks on the resources it is using. For example:
CREATE PROCEDURE ##test
AS
BEGIN
    SELECT 1
    WAITFOR DELAY '00:00:10'
    SELECT 2
END
Now execute this stored procedure quickly in two different query windows to see it running at the same time:
--Query window 1
EXEC ##test
--Query window 2
EXEC ##test
So you can see there won't be a queue of calls waiting to EXECUTE the stored procedure. The only problem you may run into is if you are logging the sproc details to a table: depending on the isolation level, you could get blocking as the logging sproc locks pages in the table while recording the data. I don't believe this would be a problem unless you are running the logging stored procedure extremely heavily, but you'd want to run some tests to be sure.

Executing a stored procedure takes much longer than executing TSQL

I have a stored procedure, and when I execute it using exec proc_name it takes 1 minute.
If I copy the code from the stored procedure, declare the params as variables, and then execute the code, it takes 10 seconds.
What's wrong?
Am I missing something here?
I am asking this because I use ADO.NET and I get a timeout error when I want to execute that stored procedure using ExecuteNonQuery.
Thank you
It's caused by suboptimal plans being used.
You mention that the s.p. has parameters; I've had similar issues due to 'parameter sniffing'.
The quickest check to see if this is the issue is to copy the input parameters into local variables inside the SP, then use only the local variables.
This stops, e.g., optimisation for certain parameter values at the expense of others.
I've had this before in an s.p. which had int parameters where certain parameter values changed the control flow (as well as how queries would be executed) a bit.
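A minimal sketch of that check, with a hypothetical procedure and table:
-- Hypothetical proc/table; the point is using @localMode instead of the sniffed @Mode
CREATE PROCEDURE dbo.usp_GetRowsByMode
    @Mode INT
AS
BEGIN
    DECLARE @localMode INT = @Mode; -- copy the parameter so its value is not sniffed at compile time

    SELECT Id, Payload
    FROM dbo.SomeTable
    WHERE Mode = @localMode;
END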
Start SQL Profiler and compare those two executions: is the extra 50 seconds spent on the server? Are the queries really the same?
You can then copy the actual query text and run it manually and check the execution plan.
Try executing the proc with the Execution plan option switched on.
It will tell you exactly which part takes time and you/we can probably take over (suggestions) from there.
Thanks
As a general idea, query plans are cached differently for ad hoc statements vs. stored procedures, so the execution time could differ because the chosen query plan could be different.
Some suggestions:
1/ Invalidate the query plan associated with that stored procedure:
sp_recompile <procname>
2/ Delete all query plans from cache (the hard way, not recommended in PROD unless you understand the consequences very well):
DBCC FREEPROCCACHE
3/ Update statistics for the involved tables.
4/ Have a look at the actual execution plan for both cases and isolate where the performance bottleneck is. Post some code and we'll provide more details.
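For step 4, one way to pull the cached plan and basic runtime stats for the procedure is to query the plan-cache DMVs; the procedure name below is a placeholder:
-- Placeholder procedure name; shows the cached plan and execution stats for that proc
SELECT ps.execution_count,
       ps.last_elapsed_time,
       qp.query_plan
FROM sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
WHERE ps.object_id = OBJECT_ID(N'dbo.proc_name');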
Option 1: ALTER the stored procedure (re-run its definition with ALTER PROCEDURE) and try again with the parameters.
Option 2: EXEC sp_updatestats
Option 3: If option 1 fails, add OPTION (RECOMPILE) at the end of your query.
E.g.: SELECT Id FROM Table1 ORDER BY Id DESC OPTION (RECOMPILE)
If this runs faster, the slowdown was due to the execution plans made by SQL Server.

Why does a SSRS report time out when the Stored Procedure it is based on returns results within a few seconds?

I have a report that renders data returned from a stored procedure. Using profiler I can catch the call to the stored procedure from the reporting services.
The report fails stating the report timed out yet I can execute the stored procedure from SSMS and it returns the data back in five to six seconds.
Note that in the example test run only two rows are returned to the report for rendering, though within the stored procedure it may have been working over thousands or even millions of records in order to collate the result passed back to Reporting Services.
I know the stored procedure could be optimised more but I do not understand why SSRS would be timing out when the execution only seems to take a few seconds to execute from SSMS.
Also another issue has surfaced. If I recreate the stored procedure, the report starts to render perfectly fine again. That is fine except after a short period of time, the report starts timing out again.
The return of the time out seems to be related to new data being added into the main table the report is running against. In the example I was testing, just one hundred new records being inserted was enough to screw up the report.
More precisely, I imagine it's not the report that is the root cause; it is the stored procedure that is causing the time out when executed from SSRS.
Once it is timing out again, the best fix I have so far is to recreate the stored procedure. This doesn't seem to be an ideal solution.
The problem also only seems to be occurring in our production environment; our test and development platforms do not seem to exhibit the same problem, though dev and test do not have the same volume of records as production.
The problem, as you described it, seems to come from variations in the execution plan of some parts of your stored procedure. Look at what statistics are kept on the tables used and how adding new rows affects them.
If you're adding a lot of rows at the end of the range of a column (think about adding autonumbers, or timestamps), the histogram for that column will become outdated rapidly. You can force an immediate update from T-SQL by executing the UPDATE STATISTICS statement.
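For example, assuming a hypothetical table name:
-- Hypothetical table; refresh all of its statistics with a full scan
UPDATE STATISTICS dbo.MainReportTable WITH FULLSCAN;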
I have also had this issue where the SPROC takes seconds to run yet SSRS simply times out.
I have found from my own experience that there are a couple of different methods to overcome this issue.
The first is parameter sniffing! When your stored procedure is executed from SSRS, SQL Server will "sniff" your parameters to see how your SPROC is using them and will then produce an execution plan based on its findings. This is good the first time you execute your SPROC, but you don't want it doing this every time you run your report. So I declare a new set of variables at the top of my SPROCs which simply store the parameters passed in, and I use those new variables throughout the query.
Example:
CREATE PROCEDURE [dbo].[usp_REPORT_ITD001]
    @StartDate DATETIME,
    @EndDate DATETIME,
    @ReportTab INT
AS
-- Deter parameter sniffing
DECLARE @snf_StartDate DATETIME = @StartDate
DECLARE @snf_EndDate DATETIME = @EndDate
DECLARE @snf_ReportTab INT = @ReportTab
...this means that when your SPROC is executed by SSRS, the passed parameters are only looked at in the first few lines of your procedure rather than throughout the whole of your query, which cuts down execution time considerably in SSRS.
The second: if your SPROC has a lot of temp tables that are declared as table variables (DECLARE @MyTable AS TABLE), these are really intensive on the server (in terms of memory) when generating reports. By using hash temp tables (SELECT MyCol1, MyCol2 INTO #MyTable) instead, SQL Server will store your temp tables in TempDB on the server rather than in system memory, making the report generation less intensive.
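A small sketch of the two forms, with made-up table and column names:
-- Table variable form
DECLARE @MyTable AS TABLE (MyCol1 INT, MyCol2 NVARCHAR(50));
INSERT INTO @MyTable (MyCol1, MyCol2)
SELECT MyCol1, MyCol2 FROM dbo.SourceTable;

-- Temp table form (materialized in tempdb)
SELECT MyCol1, MyCol2
INTO #MyTable
FROM dbo.SourceTable;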
Sometimes adding the WITH RECOMPILE option to the CREATE statement of the stored procedure helps.
This is effective in situations where the number of records explored by the procedure changes in such a way that the original execution plan is no longer optimal.
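For example, a sketch with a hypothetical procedure and table:
-- Hypothetical proc; WITH RECOMPILE forces a fresh plan on every execution
CREATE PROCEDURE dbo.usp_ReportData
    @StartDate DATETIME,
    @EndDate DATETIME
WITH RECOMPILE
AS
BEGIN
    SELECT Id, Amount
    FROM dbo.ReportRows
    WHERE CreatedAt BETWEEN @StartDate AND @EndDate;
END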
Basically, all I've done so far is optimise the sproc a bit more, and it seems to at least temporarily solve the problem.
I would still like to know what the difference is between calling the sproc from SSMS and SSRS.
