Declare table dependency in stored procedure when using T-SQL - sql-server

I am going to use dynamic SQL in my stored procedure to remove some code duplication. But I see one big drawback in this case: I have a rather big DB with lots of objects. My stored procedure uses a few tables, and since it is compiled I can find its dependencies easily from SQL Server Management Studio. But when I rewrite it to dynamically build some of the repeating queries, I will lose that dependency tracking, and the next time I need to find out who is using a table I will have to do a raw text search in my code repository rather than asking SQL Server for the dependency. This is probably a small concern, but I would still like to try to find a solution.
So my question is: is there anything I can do to still be able to see what dependencies my stored procedure has? Like declaring some dependencies up front, etc.?

You can get dependencies to show up for the stored procedure for sections of code that never execute. For example, if you wanted to "declare" a dependency on a table named TestTable, you could use
CREATE PROC MyStoredProc
AS
BEGIN
    DECLARE @SQL varchar(4000)
    SET @SQL = 'SELECT * FROM TestTable'
    EXEC (@SQL)
    RETURN
    SELECT 0 FROM TestTable -- Never executes, but shows as a dependency
END
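If you want to confirm the dependency is actually recorded, you can ask the catalog views directly; this sketch queries sys.sql_expression_dependencies (available since SQL Server 2008):

```sql
-- List objects that reference TestTable; after creating the procedure above,
-- MyStoredProc should appear in the results.
SELECT OBJECT_NAME(referencing_id) AS referencing_object
FROM sys.sql_expression_dependencies
WHERE referenced_entity_name = N'TestTable';
```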

Related

What is "dummy" in CREATE PROCEDURE statement

I'm investigating a code repo and found one thing that confuses me. SQL Server stored procedures are kept in the repo as a set of queries with the following structure:
IF OBJECT_ID(N'[dbo].[sp_ProcTitle]', N'P') IS NULL
BEGIN
    EXEC dbo.sp_executesql N'CREATE PROCEDURE [dbo].[sp_ProcTitle] AS dummy:;';
END
GO
ALTER PROCEDURE dbo.sp_ProcTitle
    @ParamOne int,
    @ParamTwo date,
    @ParamThree int
AS
BEGIN
    SET NOCOUNT ON
    -- some procedure body
END
I've never seen AS dummy:; before, and now I'm a little confused; I can't find any good explanation of what it is and how it works. Could anybody tell me what this statement means? How does it work? What is the reason to have it? Any thoughts would be good to hear. Or, please, point me to a link where I can find a good explanation.
This is simply a label, such as could be used in a GOTO statement.
The word "dummy" is unimportant. It's simply trying to create the stored procedure if it doesn't exist, with a minimal amount of text. The content is then filled in with the ALTER.
Conceivably, the dummy text could later be searched for to see if any procedures were created and didn't have their content filled in, to check against failed deployments, etc.
Why do this? Well, it preserves the creation time of the stored procedure in metadata (which can be useful for administration or for tracking down problems), and it is compatible with versions of SQL Server that lack CREATE OR ALTER support.
This might make a little more sense if we add a little formatting to the CREATE:
CREATE PROCEDURE [dbo].[sp_ProcTitle]
AS
dummy:
This is, effectively, an empty procedure with a label called dummy. The user appears to be using this to ensure that the procedure exists first, and then ALTERing it. In older versions of SQL Server, such methods were needed because there was no CREATE OR ALTER syntax: if you tried to ALTER a procedure that didn't exist, the statement failed, and likewise if you tried to CREATE a procedure that already existed, it failed.
If you are on a recent version of SQL Server, I'd suggest changing to CREATE OR ALTER and getting rid of the call to sys.sp_executesql.
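For reference, on SQL Server 2016 SP1 and later the whole IF / EXEC / ALTER dance from the question collapses into a single idempotent statement:

```sql
CREATE OR ALTER PROCEDURE dbo.sp_ProcTitle
    @ParamOne int,
    @ParamTwo date,
    @ParamThree int
AS
BEGIN
    SET NOCOUNT ON;
    -- procedure body
END
```

This creates the procedure if it is missing and alters it in place otherwise, so the repo script needs no dummy placeholder.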

How to pass a table name as variable to a stored procedure

I have inherited a bunch of stored procedures that are basically shells: inside the quotes is a huge dynamic SQL string with lots of conditions, calculations, and CASE statements. However, the table name in the FROM clause within this dynamic SQL changes every quarter.
Now, before I get flamed, I'd like to simply say that I inherited them; how they were designed was before my time. Each quarter, when a call is made to these stored procedures, the actual table name is passed as a parameter, and the dynamic SQL concatenates the table name in.
The problem with this approach is that, with each run over time, the prior designers simply tacked on more criteria as conditions and calculations. But the dynamic SQL string has a length limit. Further, it becomes quite difficult to maintain and debug.
CREATE PROCEDURE .....
AS
DECLARE @dynSQL1 varchar(max)
SET @dynSQL1 = 'SELECT......
FROM ' + @strTblName + '
WHERE.....
GROUP BY....'
...
EXEC (@dynSQL1)
GO
However, I'd like to ask you all: is there a way to turn this stored procedure with its huge dynamic SQL string into a plain vanilla stored procedure based on a parameterized table name?
My main goal is twofold: one, get away from the long dynamic SQL string; and two, easier maintenance and debugging. I would like to think that in more current versions of SQL Server (2016/2017 and on) this issue has been addressed.
Your thoughts and suggestions are greatly appreciated.
~G
So each quarter when a call is made out to these stored procedures, it comes with the actual table name passed as a parameter and then the dynamic SQL concatenates the table name.
You could change the procedure to codegen other stored procedures instead of running dynamic SQL. EG:
CREATE PROCEDURE admin.RegenerateProcedures @tableName sysname
as
begin
    declare @ddl nvarchar(max) = '
create or alter procedure dbo.SomeProc
as
begin
    SELECT......
    FROM dbo.' + quotename(@tableName) + '
    WHERE.....
    GROUP BY....
end
'
    EXEC (@ddl)
    . . .
end
GO
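Each quarter, the deployment then becomes a single call to the codegen procedure with the new table name (the table name here is purely illustrative):

```sql
-- Regenerate dbo.SomeProc against this quarter's table
EXEC admin.RegenerateProcedures @tableName = N'SalesData_2024Q1';
```

After that, callers invoke dbo.SomeProc with no dynamic SQL at query time, so the optimizer and the dependency views both see an ordinary static procedure.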

SQL Server Service Broker - Processing stored procedures in specific schema

I'm currently working on a new project where I was hoping to automate the execution of all stored procedures within a specific schema in a database dynamically. I'd like to be able to simply add the stored procedures to the specific schema (e.g. Build), and have a process that runs on a set schedule which simply iterates through all the stored procedures in the schema, and runs them in parallel.
We have an existing custom ETL system that we've built that lets us set up a bunch of jobs; this currently relies on multiple Agent jobs that pick up the stored procedures and execute them. I'm hoping for our new project to use something better, and was thinking Service Broker would be the answer.
I've investigated this product: http://partitiondb.com/go-parallel-sql/ which seems to provide some pre-built code that will allow me to queue up the procedures, however I can't extract the database they provide, apparently it has a corrupted header :-(
I'm new to the world of service brokers, and am having a little difficulty in working out how I could automatically get something to queue up a bunch of stored procedures that get executed in parallel.
Any help would be much appreciated.
Cheers
To paraphrase one of my favorite poets: "mo' tech, mo' problems". Service Broker is a great solution for asynchronous processing, but it doesn't seem like a good fit here. Specifically, if all you're looking to do is run a (possibly unknown) set of stored procedures on a schedule, dynamic SQL seems like a better fit to me. Something like (untested):
create or alter procedure dbo.gottaRunEmAll
as
begin
    declare p cursor local fast_forward for
        select name
        from sys.objects
        where schema_id = schema_id('Build')
            and type = 'P';
    open p;
    declare @name sysname, @sql nvarchar(max);
    while (1 = 1)
    begin
        fetch next from p into @name;
        if (@@rowcount = 0)
            break;
        set @sql = concat('exec [Build].', quotename(@name), ';');
        exec (@sql);
    end
    close p;
    deallocate p;
end
Better (IMO) would be to have the above procedure maintained explicitly to call the procedures that you want, and how you want them.
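As an aside, on SQL Server 2017 and later the cursor can be replaced by aggregating one EXEC per procedure into a single batch with STRING_AGG (an untested sketch, same caveats as above):

```sql
DECLARE @sql nvarchar(max);

-- Build one EXEC statement per procedure in the Build schema
SELECT @sql = STRING_AGG(CONCAT(N'EXEC [Build].', QUOTENAME(name), N';'), N' ')
FROM sys.procedures
WHERE schema_id = SCHEMA_ID('Build');

EXEC (@sql);
```

Note that, like the cursor, this still runs the procedures one after another in a single session; true parallelism would require multiple sessions or Agent jobs.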

Can a dynamic table be returned by a function or stored procedure in SQL Server?

I would like to call a stored procedure or user-defined function that returns a dynamic table that is created via a pivot expression. I don't know the number of columns up front.
Is this possible? (I am not interested in temporary tables)
You can do that via a stored procedure, as it can return any kind of table. The question is: what are you trying to achieve, and what will you do with data whose shape you know nothing about?
This cannot be done with functions (as the returned table structure must be pre-defined), but it can be done with a stored procedure. Some pseudo-code:
CREATE PROCEDURE Foo
AS
DECLARE @Command varchar(max)
SET @Command = 'SELECT * FROM MyTable'
-- For debugging, work in an optional PRINT @Command statement
EXECUTE (@Command)
RETURN 0
When you run stored procedure Foo, it builds your query as a string in @Command and then dynamically executes it, without knowing anything about what is being queried or returned; the data set returned by that EXECUTE statement is "passed back" to the process that called the procedure.
Build your query with care; this stuff can be really hard to debug. Depending on your implementation, it might be a source of SQL injection attacks (remember, the stored procedure really doesn't know what that dynamic query is going to do). For quick stuff, EXECUTE() works fine, but for safer and more useful (if elaborate) solutions, look into sp_ExecuteSQL.
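To illustrate the sp_ExecuteSQL suggestion: parameter values are passed separately from the SQL text instead of being concatenated in, which is what closes the injection hole (the table and column names here are made up):

```sql
DECLARE @Command nvarchar(max) =
    N'SELECT * FROM dbo.MyTable WHERE Id = @Id;';  -- illustrative query

EXEC sys.sp_executesql
    @Command,
    N'@Id int',    -- parameter definition
    @Id = 42;      -- actual value; never concatenated into the string
```

The query text stays constant across calls, so the plan can also be reused, which plain EXECUTE() of a freshly concatenated string does not give you.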
Yes, you can do this from a Stored Procedure, but not from a user-defined Function. It is worth looking into the Table Value Function, I believe you can also return a dynamic table from there, but I have not used that myself.

How to use stored procedures within a DTS data transformation task?

I have a DTS package with a data transformation task (data pump). I’d like to source the data with the results of a stored procedure that takes parameters, but DTS won’t preview the result set and can’t define the columns in the data transformation task.
Has anyone gotten this to work?
Caveat: The stored procedure uses two temp tables (and cleans them up, of course)
Enter some valid values for the stored procedure parameters so it runs and returns some data (or even no data, you just need the columns). Then you should be able to do the mapping/etc.. Then do a disconnected edit and change to the actual parameter values (I assume you are getting them from a global variable).
DECLARE @param1 DataType1
DECLARE @param2 DataType2
SET @param1 = global variable
SET @param2 = global variable (I forget exact syntax)
--EXEC procedure @param1, @param2
EXEC dbo.proc value1, value2
Basically you run it like this so the procedure returns results. Do the mapping, then in disconnected edit comment out the second EXEC and uncomment the first EXEC and it should work.
Basically you just need to make the procedure run and spit out results. Even if you get no rows back, it will still map the columns correctly. I don't have access to our production system (or even database) to create DTS packages, so I create them in a dummy database and replace the stored procedure with something that returns the same columns as the production one, but no rows of data. Then, after the mapping is done, I move the package to the production box with the real procedure and it works. This works great if you keep track of the database via scripts: you can just run the script to build an empty shell procedure, and when done, run the script to put back the true procedure.
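The "empty shell" trick described above just needs a procedure that returns the right columns and zero rows; a minimal sketch (all names here are invented):

```sql
-- Shell version deployed only so DTS can map columns; returns no rows
CREATE PROCEDURE dbo.GetQuarterlyData
    @param1 int,
    @param2 datetime
AS
SELECT CAST(NULL AS int)         AS CustomerId,
       CAST(NULL AS varchar(50)) AS CustomerName,
       CAST(NULL AS money)       AS Amount
WHERE 1 = 0;  -- column metadata only, zero rows
```

Once the mapping is saved, the shell is replaced by the real procedure with the same column list.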
You would need to actually load them into a table; then you can use a SQL task to move the data from that table into the permanent location if you must make a translation.
However, I have found that when working with a stored procedure to source the data, it is almost just as fast and easy to move it to its destination at the same time!
Nope, I could only use stored procedures with DTS by having them save their state in scrap tables.