I have this procedure
create procedure TEST_MT
AS
begin
set parallel_degree 1
select * from ttt WHERE CODE = 99
END
How can I find out what the parallel degree is in the session?
Execute the procedure after setting SET SHOWPLAN ON; the query plan is displayed with the actual parallel degree. Note that this may vary between executions, since the degree of parallelism can depend on available resources.
You can check the value of @@parallel_degree.
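For example, in a session (this sketch assumes Sybase ASE, where set parallel_degree and @@parallel_degree are available; TEST_MT is the procedure above):

set showplan on
go
exec TEST_MT
go
select @@parallel_degree  -- the session's current maximum parallel degree
go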
Currently I am working on a report system for our data archive.
The aim is to select data for every 1st of a month, every full hour, and so on.
So I have a bunch of parameters to select the data down to a single hour.
To achieve that I used CASE statements to adjust the select like this:
SELECT
MIN(cd.Timestamp) as Mintime,
--Hours
CASE
WHEN
@SelHour IS NOT NULL
THEN
DATEPART(HOUR, cd.Timestamp)
END as Hour,
... -- more CASES up to DATEPART(YEAR, cd.Timestamp)
FROM dbo.CustomerData cd
... -- filter data and other stuff
This statement works well for me so far, but I am a bit worried about the performance of the stored procedure, because I don't know how the server will behave with this "changing" statement. The result can vary between 20 rows and 250,000 rows or more, depending on the given parameters. As far as I know, SQL Server saves the query plan and reuses it for future executions.
When it saves the plan for the 20-row result, the performance for the 250,000-row result is probably pretty poor.
Now I am wondering what's the better approach: using this stored procedure, or building the statement inside my C# backend and passing the "adjusted" statement to SQL Server?
Thanks and greetings
For a 20-row result set it will work fine either way. But returning 250k records to C# code suggests a change in design, since loading 250k records into memory and looping over them will consume significant memory, and concurrent requests from different sessions/users will multiply that load.
Anyway, to address the problem of SQL Server reusing the same query plan, you can recompile query plans selectively or on every execution. These are the available options for recompiling the execution plan:
OPTION(RECOMPILE)
SELECT
MIN(cd.Timestamp) as Mintime,
--Hours
CASE
WHEN
@SelHour IS NOT NULL
THEN
DATEPART(HOUR, cd.Timestamp)
END as Hour,
... -- more CASES up to DATEPART(YEAR, cd.Timestamp)
FROM dbo.CustomerData cd
... -- filter data and other stuff
OPTION(RECOMPILE)
WITH RECOMPILE option: this will recompile the execution plan on every execution.
CREATE PROCEDURE dbo.uspStoredPrcName
@ParamName varchar(30) = 'abc'
WITH RECOMPILE
AS
...
RECOMPILE query hint: provide WITH RECOMPILE in the EXECUTE statement.
NOTE: this will require CREATE PROCEDURE permission in the database and ALTER permission on the schema in which the procedure is being created.
EXECUTE uspStoredPrcName WITH RECOMPILE;
GO
sp_recompile System Stored Procedure
NOTE: Requires ALTER permission on the specified procedure.
EXEC sp_recompile N'dbo.uspStoredPrcName';
GO
For more details on recompilation, refer to the Microsoft Docs:
https://learn.microsoft.com/en-us/sql/relational-databases/stored-procedures/recompile-a-stored-procedure?view=sql-server-ver15
I have 2 stored procedures where the first one calls the second one within a transaction. The second procedure should never be called directly, but only from within its parent.
Currently, to check if this is the case I'm doing the following in the second procedure:
DECLARE @inTran bit;
IF @@TRANCOUNT > 0
    SET @inTran = 0
ELSE
    SET @inTran = 1
Is this correct? Is there a better way to do this?
If you are just looking for a casual way to prevent inadvertent execution of the proc on its own, you could also check @@NESTLEVEL - this will be at least 2 if called from another proc.
CREATE PROCEDURE Child
AS
IF @@NESTLEVEL < 2 OR @@TRANCOUNT = 0
THROW 50000, 'Child proc should be called from Parent', 1;
Or you could have the parent proc set a value that the child proc reads back via SESSION_CONTEXT(), as sketched below.
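A minimal sketch of that approach (assumes SQL Server 2016+ for sp_set_session_context; the key name and proc bodies are hypothetical):

CREATE PROCEDURE Parent
AS
BEGIN
    -- Flag that the child is being called through the sanctioned path
    EXEC sp_set_session_context @key = N'CalledFromParent', @value = 1;
    EXEC Child;
    -- Clear the flag again
    EXEC sp_set_session_context @key = N'CalledFromParent', @value = NULL;
END
GO

CREATE PROCEDURE Child
AS
BEGIN
    IF ISNULL(CONVERT(int, SESSION_CONTEXT(N'CalledFromParent')), 0) <> 1
        THROW 50000, 'Child proc should be called from Parent', 1;
    -- ... actual work ...
END
GO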
None of these will prevent the proc being run by someone determined to circumvent the restrictions, though. They will just guard against accidental misuse.
There is no reliable way of doing this. Checking @@trancount only tells you whether or not you are in a transaction, and someone could do this:
BEGIN TRAN
EXECUTE nested_proc_directly
In this case, the trancount in the proc would be greater than 0. And as others have said, you cannot inspect the call stack. So, sorry.
I'm reading between the lines here a little bit, and guessing that the actual question is how to prevent Procedure2 from being run by any process except a call from Procedure1.
If this has to be as close to totally locked down as possible, create a dedicated service account to run these procedures, or their associated job(s), and then only grant EXECUTE permissions on Procedure2 to that dedicated account.
If "pretty locked down" is good enough, only grant EXECUTE permissions on Procedure2 to the service account you have running your jobs in production. At least that would keep stray users from firing it off willy-nilly.
Another thought would be to create an SSIS package with two Execute SQL Tasks in it, with the first containing all the code in Procedure1 and the second containing all the code in Procedure2, then do away with the procs and run the package instead. I don't care for embedding code in packages, though, because maintenance is irritating.
You can use @@PROCID for this. The only problem is that you need to pass it in as an input parameter.
CREATE PROCEDURE usp_Test1(@id AS int)
AS
PRINT @id
PRINT OBJECT_NAME(@id)
GO
CREATE PROCEDURE usp_Test2
AS
EXEC usp_Test1 @@PROCID
GO
EXEC usp_Test2
GO
output
1054730910
usp_Test2
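Building on that, usp_Test1 could refuse direct calls by validating the caller's name - a sketch (the default value and error number are arbitrary choices; THROW requires SQL Server 2012+):

ALTER PROCEDURE usp_Test1(@id AS int = 0)
AS
IF ISNULL(OBJECT_NAME(@id), '') <> 'usp_Test2'
    THROW 50001, 'usp_Test1 must be called from usp_Test2', 1;
PRINT 'Called from ' + OBJECT_NAME(@id)
GO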
I need to use the same query twice, but with a slightly different WHERE clause. I was wondering if it would be efficient to simply call the same stored proc with a bit value and have an IF... ELSE... statement decide which fields to compare.
Or should I make two stored procs and call each one based on logic in my app?
I'd like to understand this in more detail, though.
How is the execution plan compiled for this? Is there one for each code block in each IF... ELSE...?
Or is it compiled as one big execution plan?
You are right to be concerned about the execution plan being cached.
Martin gives a good example showing that the plan is cached and will be optimized for a certain branch of your logic the first time it is executed.
After the first execution that plan is reused even if you call the stored procedure (sproc) with a different parameter, causing your execution flow to choose another branch.
This is very bad and will kill performance. I've seen this happen many times and it takes a while to find the root cause.
The reason behind this is called "Parameter Sniffing" and it is well worth researching.
A common proposed solution (one that I don't advise) is to split up your sproc into a few tiny ones.
If you call a sproc inside a sproc that inner sproc will get an execution plan optimized for the parameter being passed to it.
Splitting up a sproc into a few smaller ones when there is no good reason (a good reason would be modularity) is an ugly workaround. Martin shows that it's possible for a statement to be recompiled by introducing a change to the schema.
I would use OPTION (RECOMPILE) at the end of the statement. This instructs the optimizer to do a statement recompilation taking into account the current values of all variables: not only parameters but also local variables are taken into account, which can make the difference between a good and a bad plan.
To come back to your question of constructing a query with a different WHERE clause according to a parameter, I would use the following pattern:
WHERE
(@parameter1 IS NULL OR col1 = @parameter1)
AND
(@parameter2 IS NULL OR col2 = @parameter2)
...
OPTION (RECOMPILE)
The downside is that the execution plan for this statement is never cached (caching of the sproc's other statements is unaffected, though), which can have an impact if the sproc is executed many times, since the compilation time now has to be taken into account. Performing a test with production-quality data will tell you whether it's a problem or not.
The upside is that you can code readable and elegant sprocs and not set the optimizer on the wrong foot.
Another option to keep in mind is that you can disable execution plan caching at the sproc level (as opposed to the statement level), which is less granular and, more importantly, will not take into account the values of local variables when optimizing.
More information at
http://www.sommarskog.se/dyn-search-2005.html
http://sqlinthewild.co.za/index.php/2009/03/19/catch-all-queries/
It is compiled once, using the initial values of the parameters passed into the procedure. Some statements may be subject to deferred compile, though, in which case they will be compiled with whatever the parameter values are when they are eventually compiled.
You can see this from running the below and looking at the actual execution plans.
CREATE TABLE T
(
C INT
)
INSERT INTO T
SELECT 1 AS C
UNION ALL
SELECT TOP (1000) 2
FROM master..spt_values
UNION ALL
SELECT TOP (1000) 3
FROM master..spt_values
GO
CREATE PROC P @C INT
AS
IF @C = 1
BEGIN
SELECT '1'
FROM T
WHERE C = @C
END
ELSE IF @C = 2
BEGIN
SELECT '2'
FROM T
WHERE C = @C
END
ELSE IF @C = 3
BEGIN
CREATE TABLE #T
(
X INT
)
INSERT INTO #T
VALUES (1)
SELECT '3'
FROM T,
#T
WHERE C = @C
END
GO
EXEC P 1
EXEC P 2
EXEC P 3
DROP PROC P
DROP TABLE T
Running the @C = 2 case shows an estimated number of rows coming from T of 1, not 1000, because that statement was compiled according to the initial parameter value of 1 that was passed in. Running the @C = 3 case gives an accurate estimated count of 1000, as the reference to the (yet to be created) temporary table means that statement was subject to a deferred compile.
What is the best way to accurately measure the performance (time to complete) of a stored procedure?
I’m about to start an attempt to optimize a monster stored procedure, and in order to correctly determine if my tweaks have any effect, I need something to compare the before and after.
My ideas so far:
Looking at query execution time in SQL Server Management Studio: not very accurate, but very convenient.
Adding timers to the stored procedure and printing the elapsed time: adding debug code like that stinks.
Using SQL Server Profiler, adding filters to target just my stored procedure. This is my best option so far.
Any other options?
There's lots of detailed performance information in the DMV sys.dm_exec_query_stats.
DECLARE @procname VARCHAR(255)
SET @procname = 'your proc name'

SELECT qs.*
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE st.objectid = OBJECT_ID(@procname)
This will give you cumulative performance data and execution counts per cached statement.
You can use DBCC FREEPROCCACHE to reset the counters (don't run this on a production system, since it will purge all the cached query plans).
You can get the query plans for each statement by extending this query:
SELECT SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
       ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text) ELSE qs.statement_end_offset END - qs.statement_start_offset)/2)+1) AS [sub_statement]
     , *
     , CONVERT(XML, tqp.query_plan)
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
CROSS APPLY sys.dm_exec_text_query_plan(qs.plan_handle, qs.statement_start_offset, qs.statement_end_offset) tqp
WHERE st.objectid = OBJECT_ID(@procname)
ORDER BY qs.statement_start_offset, qs.execution_count
This will give you pointers about which parts of the SP are performing badly, and - if you include the execution plans - why.
Profiler is the most reliable method. You can also use SET STATISTICS IO ON and SET STATISTICS TIME ON but these don't include the full impact of scalar UDFs.
You can also turn on the "include client statistics" option in SSMS to get an overview of the performance of the last 10 runs.
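For example, a minimal measurement wrapper around the procedure under test (dbo.uspMonsterProc is a hypothetical name):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

EXEC dbo.uspMonsterProc;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
-- CPU/elapsed times and logical reads are printed on the Messages tab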
One possible improvement on your timers/debug option is to store the results in a table. In this way you can slice-and-dice the resulting timing data with SQL queries rather than just visually parsing your debug output.
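A sketch of that idea (the table name and logging layout are just one possible design, not from the original answer):

CREATE TABLE dbo.ProcTimings
(
    RunId     int IDENTITY PRIMARY KEY,
    Step      varchar(100),
    StartTime datetime2,
    EndTime   datetime2,
    ElapsedMs AS DATEDIFF(ms, StartTime, EndTime)
);
GO

DECLARE @start datetime2 = SYSDATETIME();
EXEC dbo.uspMonsterProc; -- hypothetical procedure under test
INSERT INTO dbo.ProcTimings (Step, StartTime, EndTime)
VALUES ('full run', @start, SYSDATETIME());
GO

-- Slice and dice the timings with plain SQL instead of reading debug output:
SELECT Step, COUNT(*) AS Runs, AVG(ElapsedMs) AS AvgMs, MAX(ElapsedMs) AS MaxMs
FROM dbo.ProcTimings
GROUP BY Step;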
You want to ensure that you are performing fair tests, i.e. comparing like with like. Consider running your tests with a cold cache in order to force your stored procedure execution to be served from the IO subsystem each time you perform your tests.
Take a look at the DBCC commands DBCC FREEPROCCACHE and DBCC FREESYSTEMCACHE.
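A typical cold-cache test harness looks like this; DBCC DROPCLEANBUFFERS (not named in the answer above, but the command that actually empties the buffer cache) does the main work. None of these should be run on a production server:

CHECKPOINT;              -- flush dirty pages so DROPCLEANBUFFERS can evict them
DBCC DROPCLEANBUFFERS;   -- empty the buffer cache, forcing physical IO
DBCC FREEPROCCACHE;      -- discard cached plans, forcing a recompile

EXEC dbo.uspMonsterProc; -- hypothetical procedure under test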
I have a stored procedure for SQL Server 2000 that can only have a single instance being executed at any given moment. Is there any way to check and ensure that the procedure is not currently in execution?
Ideally, I'd like the code to be self-contained and efficient (fast). I also don't want to do something like creating a global temp table and checking for its existence, because if the procedure fails for some reason, it will always be considered as running...
I've searched, I don't think this has been asked yet. If it has been, sorry.
Yes, there is a way: use what are known as SQL Server application locks.
EDIT: Yes, this also works in SQL Server 2000.
You can use sp_getapplock and sp_releaseapplock, as in the example found at Lock a Stored Procedure for Single Use Only.
But is that what you are really trying to do? Are you trying to get a transaction with a high isolation level? You would likely be much better off handling that type of concurrency at the application level, as higher-level languages generally have much better primitives for that sort of thing.
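A minimal sketch of the application-lock approach (the resource and procedure names are illustrative; a Session lock owner avoids needing a transaction, though the lock then persists until released or the session ends):

CREATE PROCEDURE dbo.SingleInstanceProc
AS
BEGIN
    DECLARE @rc int
    -- Try to take the lock without waiting; a negative return code
    -- means another instance already holds it
    EXEC @rc = sp_getapplock @Resource = 'SingleInstanceProc',
                             @LockMode = 'Exclusive',
                             @LockOwner = 'Session',
                             @LockTimeout = 0
    IF @rc < 0
    BEGIN
        RAISERROR('Procedure is already running.', 16, 1)
        RETURN
    END

    -- ... do the work here ...

    EXEC sp_releaseapplock @Resource = 'SingleInstanceProc', @LockOwner = 'Session'
END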
How about locking a dummy table? That wouldn't cause deadlocks in case of failures.
One of the external links shared in the earlier replies had helpful info, but personally I prefer standalone answers/snippets to be right here on the Stack Overflow question page. The snippet below is what I used to solve my (similar) problem. If anyone has problems (or adjustment suggestions), please chime in.
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[MyLockedAndDelayedStoredProcedure]') AND type in (N'P', N'PC'))
DROP PROCEDURE [MyLockedAndDelayedStoredProcedure]
GO
CREATE PROCEDURE [MyLockedAndDelayedStoredProcedure]
@param1 nvarchar(max) = ''
AS
BEGIN
DECLARE @LockedTransactionReturnCode INT
PRINT 'MyLockedAndDelayedStoredProcedure CALLED at ' + CONVERT(VARCHAR(12), GETDATE(), 114);
BEGIN TRANSACTION
EXEC @LockedTransactionReturnCode = sp_getapplock @Resource = 'MyLockedAndDelayedStoredProcedure_LOCK', @LockMode = 'Exclusive', @LockOwner = 'Transaction', @LockTimeout = 10000
-- Bail out if the lock was not acquired within the timeout
IF @LockedTransactionReturnCode < 0
BEGIN
ROLLBACK TRANSACTION;
RETURN;
END
PRINT 'MyLockedAndDelayedStoredProcedure STARTED at ' + CONVERT(VARCHAR(12), GETDATE(), 114);
-- Do your Stored Procedure Stuff here
SELECT @param1;
-- If you don't want/need a delay remove this line
WAITFOR DELAY '00:00:03'; -- 3 second delay
PRINT 'MyLockedAndDelayedStoredProcedure ENDED at ' + CONVERT(VARCHAR(12), GETDATE(), 114);
COMMIT
END
-- https://gist.github.com/cemerson/366358cafc60bc1676f8345fe3626a3f
At the start of the procedure, check whether the piece of data is 'locked'; if not, lock it.
At the end of the procedure, unlock the piece of data.
I.e.:
DECLARE @IsLocked bit

SELECT @IsLocked = IsLocked FROM CheckLockedTable WHERE spName = 'this_storedProcedure'
IF @IsLocked = 1
    RETURN
ELSE
    UPDATE CheckLockedTable SET IsLocked = 1 WHERE spName = 'this_storedProcedure'

-- ... body of the stored procedure ...

-- At end of Stored Procedure
UPDATE CheckLockedTable SET IsLocked = 0 WHERE spName = 'this_storedProcedure'
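Note that the check-then-update above is not atomic: two sessions can both read IsLocked = 0 before either sets it to 1. A sketch of a variant that claims the flag in a single statement (same hypothetical table as above):

UPDATE CheckLockedTable
SET IsLocked = 1
WHERE spName = 'this_storedProcedure' AND IsLocked = 0

IF @@ROWCOUNT = 0
    RETURN -- another instance is already running

-- ... body of the stored procedure ...

UPDATE CheckLockedTable SET IsLocked = 0 WHERE spName = 'this_storedProcedure'

It still shares the drawback the question mentions: if the procedure fails before the final UPDATE, the flag stays set.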