Dynamic SQL and stored procedure optimization - sql-server

I've read that using Dynamic SQL in a stored procedure can hurt the performance of your stored procedures. I guess the theory is that the stored procedure won't store an execution plan for SQL executed via EXEC or sp_executesql.
I want to know if this is true. If it is true, do I have the same problem with multiple nested IF blocks, each one with a different "version" of my SQL statement?

If you have multiple nested IF blocks then SQL Server will be able to store execution plans.
I'm assuming that the IFs are straightforward, e.g. IF @Parameter1 IS NOT NULL.
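For illustration, a minimal sketch of the kind of procedure I mean (the table and column names are made up): each IF branch is a static statement, so each branch gets its own plan that can be cached and reused.
CREATE PROCEDURE dbo.SearchOrders
    @CustomerID int = NULL
AS
BEGIN
    SET NOCOUNT ON;
    -- Each branch is plain, static SQL, so SQL Server caches a plan per branch.
    IF @CustomerID IS NOT NULL
        SELECT OrderID, OrderDate
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID;
    ELSE
        SELECT OrderID, OrderDate
        FROM dbo.Orders;
END
GO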
SchmitzIT's answer is correct that SQL Server can also store execution plans for dynamic SQL. However, this is only true if the SQL is properly built and executed.
By properly built, I mean explicitly declaring the parameters and passing them to sp_executesql. For example:
declare @Param1 nvarchar(255) = 'foo'
    , @Param2 nvarchar(255) = 'bar'
    , @sqlcommand nvarchar(max)
    , @paramList nvarchar(max)
set @paramList = N'@Param1 nvarchar(255), @Param2 nvarchar(255)'
set @sqlcommand = N'Select Something from Table where Field1 = @Param1 AND Field2 = @Param2'
exec sp_executesql @stmt = @sqlcommand
    , @params = @paramList
    , @Param1 = @Param1
    , @Param2 = @Param2
As you can see, the @sqlcommand text does not hardcode the parameter values; they are passed separately in the exec sp_executesql call.
If you write bad old dynamic SQL like this:
set @sqlcommand = N'Select Something from Table where Field1 = ''' + @Param1 + ''' AND Field2 = ''' + @Param2 + ''''
exec sp_executesql @sqlcommand
then SQL Server won't be able to reuse cached execution plans, because every distinct pair of values produces a different statement text.

This is what MSDN has to say about it (the second paragraph is the part most relevant to your question):
sp_executesql has the same behavior as EXECUTE with regard to batches,
the scope of names, and database context. The Transact-SQL statement
or batch in the sp_executesql @stmt parameter is not compiled until
the sp_executesql statement is executed. The contents of @stmt are
then compiled and executed as an execution plan separate from the
execution plan of the batch that called sp_executesql. The
sp_executesql batch cannot reference variables declared in the batch
that calls sp_executesql. Local cursors or variables in the
sp_executesql batch are not visible to the batch that calls
sp_executesql. Changes in database context last only to the end of the
sp_executesql statement.
sp_executesql can be used instead of stored procedures to execute a
Transact-SQL statement many times when the change in parameter values
to the statement is the only variation. Because the Transact-SQL
statement itself remains constant and only the parameter values
change, the SQL Server query optimizer is likely to reuse the
execution plan it generates for the first execution.
http://msdn.microsoft.com/en-us/library/ms188001.aspx
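If you want to see this reuse for yourself, one rough way (a sketch; the plan-cache DMVs below exist in SQL Server 2005 and later, and the LIKE filter is just illustrative) is to run the parameterized sp_executesql example above a few times with different values and then check how often its single cached entry has been used:
-- One 'Prepared' plan-cache entry should appear for the parameterized
-- statement, with usecounts climbing on every execution.
SELECT cp.usecounts, cp.objtype, st.[text]
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.[text] LIKE N'%Select Something from Table%'
  AND st.[text] NOT LIKE N'%dm_exec_cached_plans%'; -- exclude this query itself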

Related

Performance differences calling sp_executesql with dynamic SQL vs parameters

Given:
CREATE PROCEDURE [dbo].[my_storedproc]
@param1 int, @param2 varchar(100)
AS
<<whatever>>
GO
Are there known performance differences between these different execution methods?:
-- Method #1:
declare @param1 int = 1
declare @param2 varchar(100) = 'hello'
exec my_storedproc @param1, @param2

-- Method #2:
exec my_storedproc @param1=1, @param2='hello'

-- Method #3:
declare @param1 int = 1
declare @param2 varchar(100) = 'hello'
declare @procname nvarchar(100) = N'my_storedproc @param1, @param2'
declare @params nvarchar(4000) = N'@param1 int, @param2 varchar(100)'
exec sp_executesql @procname, @params, @param1, @param2

-- Method #4:
declare @procname nvarchar(4000) = N'my_storedproc @param1=1, @param2=''hello'''
exec sp_executesql @procname

-- Method #5:
declare @procname nvarchar(4000) = N'my_storedproc 1, ''hello'''
exec sp_executesql @procname

-- Method #6:
declare @procname nvarchar(4000) = N'my_storedproc 1, ''hello'''
exec (@procname)
"Why do you ask?" you ask? I am trying to find a way to generically execute stored procedures entirely based upon metadata, the controlling stored procedure that will physically execute all the other configured (in metadata) stored procedures knows nothing about them other than what is defined in the metadata. Within this controller SP, I cannot (in any practical sense) know and declare the specific physical parameters (with their required data types) required for every possible stored proc that might have to be called - I am trying to find a way to execute them entirely generically, while still hopefully maintaining decent performance (reusing query plans, etc).
There really shouldn't be a performance difference between the 6 options since they are all executing the stored procedure and not any SQL statements directly.
However, there is no better indication of performance than testing this on your own system. You already have the 6 test cases so it shouldn't be hard to try each one.
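If it helps, here is a rough sketch of one way to check plan caching and execution counts while you test (sys.dm_exec_procedure_stats is available from SQL Server 2008 onwards):
-- Run each method several times, then compare: cached_time tells you when the
-- plan was cached, execution_count how many calls reused it.
SELECT OBJECT_NAME(ps.[object_id]) AS proc_name,
       ps.cached_time,
       ps.execution_count,
       ps.total_worker_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.[object_id] = OBJECT_ID(N'dbo.my_storedproc');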
Within this controller SP, I cannot (in any practical sense) know and declare the specific physical parameters (with their required data types) required for every possible stored proc that might have to be called
Why not? I don't see why you couldn't dynamically generate the SQL for Methods 2 and 3 based on the output of either of the following queries:
SELECT OBJECT_NAME(sp.[object_id]), *
FROM sys.parameters sp
WHERE sp.[object_id] = OBJECT_ID(N'dbo.my_storedproc');
SELECT isp.*
FROM INFORMATION_SCHEMA.PARAMETERS isp
WHERE isp.[SPECIFIC_NAME] = N'my_storedproc'
AND isp.[SPECIFIC_SCHEMA] = N'dbo';
And with that info, you could create a table to contain the various parameter values for each parameter for each proc. In fact, you could even set it up to have some parameters with "global" values for all variations and then some parameter values are variations for a particular proc.
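For example, here is a rough sketch of building the @params declaration string for sp_executesql straight from sys.parameters (it only covers the common cases; types such as decimal would need precision/scale handling added):
DECLARE @paramList nvarchar(max);

SET @paramList = STUFF((
    SELECT N', ' + sp.name + N' ' + TYPE_NAME(sp.user_type_id)
           + CASE
                 WHEN TYPE_NAME(sp.user_type_id) IN (N'varchar', N'char', N'varbinary')
                     THEN N'(' + CASE WHEN sp.max_length = -1 THEN N'max'
                                      ELSE CAST(sp.max_length AS nvarchar(10)) END + N')'
                 WHEN TYPE_NAME(sp.user_type_id) IN (N'nvarchar', N'nchar')
                     THEN N'(' + CASE WHEN sp.max_length = -1 THEN N'max'
                                      ELSE CAST(sp.max_length / 2 AS nvarchar(10)) END + N')'
                 ELSE N''
             END
    FROM sys.parameters AS sp
    WHERE sp.[object_id] = OBJECT_ID(N'dbo.my_storedproc')
    ORDER BY sp.parameter_id
    FOR XML PATH(N'')), 1, 2, N'');

SELECT @paramList; -- e.g. N'@param1 int, @param2 varchar(100)'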

SQL Server Stored Procedure Parameter

I am trying to create a stored procedure with one parameter. I want the stored procedure to perform an update query and the parameter that I pass when it executes is the table that should be updated. I have been unsuccessful with creating the procedure with the parameter.
CREATE PROCEDURE cleanq7 @tablename varchar(100)
AS
BEGIN
UPDATE @tablename
SET IMPOSSIBLE_CASE = '1'
WHERE q7='1'
GO
The message I receive when I run this is:
Msg 102, Level 15, State 1, Procedure cleanq7, Line 6
Incorrect syntax near '1'.
I tried the update query on its own against a table in a test database and it functioned as expected, so I imagine this is an issue with my syntax for declaring the stored procedure.
Any help would be greatly appreciated!
CREATE PROCEDURE cleanq7
    @tablename NVARCHAR(128)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @Sql NVARCHAR(MAX);
    SET @Sql = N'UPDATE ' + QUOTENAME(@tablename) +
               N' SET IMPOSSIBLE_CASE = ''1''
                  WHERE q7 = ''1''';
    EXECUTE sp_executesql @Sql;
END
GO
Since you are passing the table name, you will need to build your UPDATE statement dynamically and then execute it using the system stored procedure sp_executesql.
When you pass the table name as a string, SQL Server treats it as a string, not as an object name. The QUOTENAME() function puts square brackets [] around the passed table name so that SQL Server treats it as an object name.
QUOTENAME() also helps protect you against SQL injection attacks.
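Assuming a table with the q7 and IMPOSSIBLE_CASE columns exists, you would then call it like this (the table name here is hypothetical):
-- Pass just the bare table name: QUOTENAME wraps the whole string in one
-- pair of brackets, so 'dbo.SomeTable' would become the single name [dbo.SomeTable].
EXECUTE cleanq7 @tablename = N'SomeTable';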

Stored procedure to insert values into dynamic table

I was wondering if I can make a stored procedure that inserts values into a dynamic table.
I tried:
create procedure asd
(@table varchar(10), @id int)
as
begin
insert into @table values (@id)
end
I also tried defining @table as a table variable.
Thanks for your help!
This might work for you.
CREATE PROCEDURE asd
    (@table nvarchar(10), @id int)
AS
BEGIN
    DECLARE @sql nvarchar(max)
    SET @sql = 'INSERT INTO ' + @table + ' (id) VALUES (' + CAST(@id AS nvarchar(max)) + ')'
    EXEC sp_executesql @sql
END
See more here: http://msdn.microsoft.com/de-de/library/ms188001.aspx
Yes, to implement this directly, you need dynamic SQL, as others have suggested. However, I would also agree with the comment by @Tomalak that attempts at universality of this kind might result in less secure or less efficient (or both) code.
If you feel that you must have this level of dynamicity, you could try the following approach, which, although it requires more effort than plain dynamic SQL, achieves much the same thing without the drawbacks just mentioned.
The idea is first to create all the necessary insert procedures, one for every table into which you want to insert this many values of this kind (i.e., as per your example, exactly one int value). It is crucial to name those procedures uniformly, for instance using the template TablenameInsert, where Tablename is the target table's name.
Next, create this universal insert procedure of yours as follows:
CREATE PROCEDURE InsertIntValue (
    @TableName sysname,
    @Value int
)
AS
BEGIN
    DECLARE @SPName sysname;
    SET @SPName = @TableName + 'Insert';
    EXECUTE @SPName @Value;
END;
As can be seen from the manual, when invoking a module with the EXECUTE command, you can specify a variable instead of the actual module name. The variable in this case should be of a string type and is supposed to contain the name of the module to execute. This is not dynamic SQL, because the syntax is not the same. (For this to be dynamic SQL, the variable would need to be enclosed in parentheses.) Instead, this is essentially parametrising of the module name, probably the only kind of natively supported name parametrisation in (Transact-)SQL.
Like I said, this requires more effort than dynamic SQL, because you still have to create all the many stored procedures that this universal SP should be able to invoke. Nevertheless, as a result, you get code that is both secure (the @SPName variable is viewed by the server only as a name, not as an arbitrary snippet of SQL) and efficient (the actual stored procedure being invoked already exists, i.e. it is already compiled and has a query plan).
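To make that concrete, here is a sketch with a made-up dbo.Person table: the dedicated procedure follows the TablenameInsert template, and the universal procedure then dispatches to it by name.
-- Hypothetical target table and its dedicated insert procedure.
CREATE TABLE dbo.Person (SomeValue int NOT NULL);
GO
CREATE PROCEDURE dbo.PersonInsert (@Value int)
AS
BEGIN
    INSERT INTO dbo.Person (SomeValue) VALUES (@Value);
END;
GO

-- The universal procedure resolves 'Person' + 'Insert' and executes that name.
EXECUTE InsertIntValue @TableName = N'Person', @Value = 42;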
You'll need to use dynamic SQL.
To create dynamic SQL, you need to build up the query as a string, using IF statements and other logic to add your variables, table names, and so on.
Declare a text variable and use it to concatenate together your desired SQL.
You can then execute this code using the EXEC command.
Example:
DECLARE @SQL VARCHAR(100)
DECLARE @TableOne VARCHAR(20) = 'TableOne'
DECLARE @TableTwo VARCHAR(20) = 'TableTwo'
DECLARE @SomeInt INT = 1

SET @SQL = 'INSERT INTO '

IF (@SomeInt = 1)
    SET @SQL = @SQL + @TableOne
IF (@SomeInt = 2)
    SET @SQL = @SQL + @TableTwo

SET @SQL = @SQL + ' VALUES....etc'

EXEC (@SQL)
However, something you should really watch out for when using this method is a security problem called SQL injection.
It is worth reading up on SQL injection before using this technique.
One way to guard against SQL injection is to validate the inputs in your own code before passing the variables to SQL Server.
An alternative way (probably best used in conjunction with validation) is to use the built-in stored procedure sp_executesql instead of the EXEC command.
Details and usage are covered in the sp_executesql documentation (http://msdn.microsoft.com/en-us/library/ms188001.aspx).
You'll have to build your SQL slightly differently and pass your parameters to the stored procedure as arguments as well as the @SQL, as sketched below.
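Applied to this question's insert example, the body of such a procedure might look roughly like this (the (id) column name is an assumption about the target tables, as in the earlier answer; @table and @id are the procedure's parameters):
-- The identifier is quoted and concatenated; the value travels as a real parameter.
DECLARE @sql nvarchar(max);
SET @sql = N'INSERT INTO ' + QUOTENAME(@table) + N' (id) VALUES (@id)';
EXEC sp_executesql @sql, N'@id int', @id = @id;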

Using sp_executesql with params complains of the need to declare a variable

I am attempting to make a stored procedure that uses sp_executesql. I have looked long and hard here, but I cannot see what I am doing incorrectly in my code. I'm new to stored procedures/SQL Server functions in general, so I'm guessing I'm missing something simple. The stored procedure ALTER happens fine, but when I try to run it I get an error.
The error says:
Msg 1087, Level 15, State 2, Line 3
Must declare the table variable "@atableName"
The procedure looks like this:
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go
ALTER PROCEDURE [dbo].[sp_TEST]
    @tableName varchar(50),
    @tableIDField varchar(50),
    @tableValueField varchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @SQLString nvarchar(500);
    SET @SQLString = N'SELECT DISTINCT @atableIDField FROM @atableName';
    EXEC sp_executesql @SQLString,
        N'@atableName varchar(50),
          @atableIDField varchar(50),
          @atableValueField varchar(50)',
        @atableName = @tableName,
        @atableIDField = @tableIDField,
        @atableValueField = @tableValueField;
END
And I'm trying to call it with something like this.
EXECUTE sp_TEST 'PERSON', 'PERSON.ID', 'PERSON.VALUE'
This example isn't adding anything special, but I have a large number of views that have similar code. If I could get this stored procedure working I could get a lot of repeated code shrunk down considerably.
Thanks for your help.
Edit: I am attempting to do this for easier maintainability. I have multiple views that have basically the same SQL, except that the table name is different. Data is brought to the SQL Server instance for reporting purposes. When I have a table containing multiple rows per person ID, each containing a value, I often need them combined into a single cell for the users.
You cannot parameterise a table name, so it will fail on @atableName.
You need to concatenate @atableName into the statement text yourself, which kind of defeats the purpose of using sp_executesql.
This would work, but it is not advisable unless you are just trying to learn and experiment:
ALTER PROCEDURE [dbo].[sp_TEST]
    @tableName varchar(50),
    @tableIDField varchar(50),
    @tableValueField varchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @SQLString nvarchar(500);
    SET @SQLString = N'SELECT DISTINCT ' + QUOTENAME(@tableIDField) + N' FROM ' + QUOTENAME(@tableName);
    EXEC sp_executesql @SQLString;
END
Read The Curse and Blessings of Dynamic SQL
You cannot use variables to pass table names and column names to a dynamic query as parameters. Had that been possible, we wouldn't actually have needed dynamic queries for that!
Instead, you should use the variables to construct the dynamic query, like this:
SET @SQLString = N'SELECT DISTINCT ' + QUOTENAME(@tableIDField) +
                 N' FROM ' + QUOTENAME(@tableName);
Parameters are used to pass values, typically for use in filter conditions.
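To make the split concrete with this procedure's parameters, here is a sketch that adds a hypothetical filter (the @FilterValue variable is mine, not part of the original procedure): identifiers are built into the string with QUOTENAME, while the value is passed as a parameter.
DECLARE @FilterValue varchar(50) = 'some value'; -- hypothetical filter value
SET @SQLString = N'SELECT DISTINCT ' + QUOTENAME(@tableIDField) +
                 N' FROM ' + QUOTENAME(@tableName) +
                 N' WHERE ' + QUOTENAME(@tableValueField) + N' = @FilterValue';
EXEC sp_executesql @SQLString, N'@FilterValue varchar(50)', @FilterValue = @FilterValue;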

Temporary function or stored procedure in T-SQL

Is there any way to create a temporary stored procedure or function in MS SQL Server 2005? I would like to use this stored procedure only in my query, so it should be gone after execution.
I have a query I would like to EXEC against some data, but for every table I process with this command, I need to change some parts of it. So I thought I would create a temporary SP that would return the query built from the arguments I provide (such as the table name) and then execute that query with EXEC.
This stored procedure will not be useful to me later, so I would like it to be temporary: when I have finished executing my query, it should disappear.
This question is a bit old, but the other answers failed to provide the syntax for creating temporary procedures. The syntax is the same as for temporary tables: #name for local temporary objects, ##name for global temporary objects.
CREATE PROCEDURE #uspMyTempProcedure AS
BEGIN
print 'This is a temporary procedure'
END
This is described in the "Procedure Name" section of the official documentation. http://technet.microsoft.com/en-us/library/ms187926%28v=sql.90%29.aspx
I'm using this technique to deduplicate the code for my primitive T-SQL unit tests. A real unit testing framework would be better, but this is better than nothing and "garbage collects" after itself.
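For completeness, calling it and cleaning it up looks just like it does for a regular procedure:
EXEC #uspMyTempProcedure;           -- prints 'This is a temporary procedure'
DROP PROCEDURE #uspMyTempProcedure; -- optional; it is dropped automatically when the connection closes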
Re your edit - it sounds like you should be using sp_ExecuteSQL against a (parameterized) nvarchar that contains TSQL.
Search on sp_ExecuteSQL; a simple example:
DECLARE @SQL nvarchar(4000),
        @Table varchar(20) = 'ORDERS',
        @IDColumn varchar(20) = 'OrderID',
        @ID int = 10248
SET @SQL = 'SELECT * FROM [' + @Table + '] WHERE ['
         + @IDColumn + '] = @Key'
EXEC sp_executesql @SQL, N'@Key int', @ID
Note that table and column names must be concatenated into the query, but values (such as @Key) can be parameterized.
There is such a thing as a temporary stored procedure, but it is scoped to the connection, not to a single procedure or query.
However, you might want to look at Common Table Expressions; they may be what you are after (although a CTE only exists for the single statement that follows it).
Perhaps you can clarify what you are trying to do?
Just use the SQL from the stored proc inside your query. There is no need to create a stored procedure in the DB; it won't give you any advantage over a normal query.

Resources