How to log in T-SQL - sql-server

I'm using ADO.NET to access SQL Server 2005 and would like to be able to log from inside the T-SQL stored procedures that I'm calling. Is that somehow possible?
I'm unable to see output from the PRINT statement when using ADO.NET, and since I only want logging for debugging, the ideal solution would be to emit messages to DebugView from SysInternals.

I think writing to a log table would be my preference.
Alternatively, as you are using 2005, you could write a simple SQLCLR procedure to wrap around the EventLog.
Or you could use xp_logevent if you want to write to the SQL Server log.
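For the log-table route, a minimal sketch could look like this (the table, procedure, and error number are illustrative, not a fixed convention):
-- Illustrative log table and wrapper procedure
CREATE TABLE dbo.DebugLog
(
    id int IDENTITY(1,1) PRIMARY KEY,
    logged_at datetime NOT NULL DEFAULT GETDATE(),
    source sysname NULL,
    message nvarchar(400) NOT NULL
)
GO
CREATE PROCEDURE dbo.LogDebug @source sysname, @message nvarchar(400)
AS
    INSERT INTO dbo.DebugLog (source, message) VALUES (@source, @message)
GO
-- xp_logevent instead writes to the SQL Server error log / Windows application log;
-- the user-defined message number must be 50000 or higher
EXEC master..xp_logevent 60000, 'Hello from my proc', informational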

I solved this by writing a SQLCLR procedure, as Eric Z Beard suggested. The assembly must be signed with a strong name key file.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
public partial class StoredProcedures
{
[Microsoft.SqlServer.Server.SqlProcedure]
public static int Debug(string s)
{
System.Diagnostics.Debug.WriteLine(s);
return 0;
}
}
Created a key and a login:
USE [master]
CREATE ASYMMETRIC KEY DebugProcKey FROM EXECUTABLE FILE =
'C:\..\SqlServerProject1\bin\Debug\SqlServerProject1.dll'
CREATE LOGIN DebugProcLogin FROM ASYMMETRIC KEY DebugProcKey
GRANT UNSAFE ASSEMBLY TO DebugProcLogin
Imported it into SQL Server:
USE [mydb]
CREATE ASSEMBLY SqlServerProject1 FROM
'C:\..\SqlServerProject1\bin\Debug\SqlServerProject1.dll'
WITH PERMISSION_SET = unsafe
CREATE FUNCTION dbo.Debug( @message as nvarchar(200) )
RETURNS int
AS EXTERNAL NAME SqlServerProject1.[StoredProcedures].Debug
Then I was able to log in T-SQL procedures using
exec dbo.Debug @message = 'Hello World'

You can either log to a table, by simply inserting a new row, or you can implement a CLR stored procedure to write to a file.
Be careful with writing to a table, because if the action happens in a transaction and the transaction gets rolled back, your log entry will disappear.
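One way around that, since table variables are not affected by transactions, is to buffer messages in a table variable and copy them to the permanent log table after the rollback. A sketch, assuming a hypothetical dbo.ErrorLog table:
DECLARE @log TABLE (message nvarchar(400))
BEGIN TRAN
INSERT INTO @log (message) VALUES ('about to run the risky update')
-- ... work that may fail ...
ROLLBACK TRAN
-- the rows in @log survive the rollback and can be persisted now
INSERT INTO dbo.ErrorLog (message) SELECT message FROM @log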

Logging from inside a SQL sproc would be better done to the database itself. T-SQL can write to files but it's not really designed for it.

There's the PRINT command, but I prefer logging into a table so you can query it.

You can write rows to a log table from within a stored procedure. As others have indicated, you could go out of your way to write to some text file or other log with CLR or xp_logevent, but it seems like you need more volume than would be practical for such uses.
The tough cases occur (and it's these that you really need your log for) when transactions fail. Since any logging that occurs during these transactions will be rolled back along with the transaction that they are part of, it is best to have a logging API that your clients can use to log errors. This can be a simple DAL that either logs to the same database, or to a shared one.

For what it's worth, I've found that when I don't assign an InfoMessage handler to my SqlConnection:
sqlConnection.InfoMessage += new SqlInfoMessageEventHandler(MySqlConnectionInfoMessageHandler);
where the signature of the InfoMessageHandler looks like this:
MySqlConnectionInfoMessageHandler(object sender, SqlInfoMessageEventArgs e)
then my PRINT statements in my Stored Procs do not appear in DbgView.

You could use output variables for passing back messages, but that relies on the proc executing without errors.
create procedure usp_LoggableProc
@log varchar(max) OUTPUT
as
-- T-SQL statement here ...
select @log = @log + 'X is foo'
And then in your ADO code somewhere:
string log = (string)command.Parameters["@log"].Value;
You could use RAISERROR to create your own custom messages with the information that you require. Severity 10 and below arrive through the connection's InfoMessage event, while severity 11 and higher will be available through the usual SqlException Errors collection in your ADO code:
RAISERROR('X is Foo', 10, 1)
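This can also double as lightweight progress tracing; WITH NOWAIT flushes the message to the client immediately instead of waiting for the output buffer to fill. A sketch:
-- severity 0: delivered via the InfoMessage event, sent immediately
RAISERROR('Processed %d of %d rows', 0, 1, 500, 1000) WITH NOWAIT
-- severity 16: raises a SqlException in ADO.NET
RAISERROR('X is Foo', 16, 1)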
That said, just for debugging and in your situation, I'd simply insert varchar messages into an error table like the others have suggested, and select * from it when you're debugging.

You may want to check Log4TSQL. It provides database logging for stored procedures and triggers in SQL Server 2005 - 2008. You can set separate, independent log levels on a per-procedure/trigger basis.

Use cmd commands with xp_cmdshell
I found this while searching for an answer to this question.
https://www.databasejournal.com/features/mssql/article.php/1467601/A-general-logging-t-sql-process-to-write-to-txt-files.htm
select @cmdtxt = 'echo ' + @logEntry + ' >> drive:\path\filename.txt'
exec master..xp_cmdshell @cmdtxt
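Note that xp_cmdshell is disabled by default on SQL Server 2005 and later, so a sysadmin first has to enable it:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE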

I've been searching for a way to do this, as I am trying to debug some complicated, chained, stored procedures, all that are called by an external API, and which operate in the context of a transaction.
I'd been writing diagnostic messages into a logging file, but if the transaction rolls back, the new log entries disappear with the rollback. I found a way! And it works pretty well. And it has already saved me many, many hours of debugging time.
1) Create a linked server to the same SQL instance, using the login's security context. In my case, the simplest method was to use the localhost loopback address, 127.0.0.1.
2) Set the linked server to enable RPC, and to NOT "Enable Promotion of Distributed Transactions". This means that calls through that server will take place outside of your transaction context.
3) In your logging procedure (I have an example excerpted below), write to the log table through the loopback linked server if you are in a transaction. You can write to it the usual way if you are not. Writing through the linked server is considerably slower than direct DML.
Voila! My in-process logging survives the rollback, and I can find out what's happening internally when things are going south.
I can't claim credit for thinking of this; I found the approach after some time with Google, but I'm so pleased with the result that I felt I had to share it.
USE TX
GO
CREATE PROCEDURE dbo.LogError(@errorSource Varchar(32), @msg Varchar(400))
AS BEGIN
SET NOCOUNT ON
IF @@TRANCOUNT > 0
EXEC [127.0.0.1].TX.dbo.LogError @errorSource, @msg
ELSE
INSERT INTO TX.dbo.ErrorLog(source_module, message)
SELECT @errorSource, @msg
END
GO
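For reference, steps 1) and 2) above can be scripted roughly like this; the provider name may need adjusting for your SQL Server version:
-- step 1: loopback linked server, named after the loopback address
EXEC sp_addlinkedserver
    @server = N'127.0.0.1',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'127.0.0.1'
-- step 2: allow RPC, and keep remote calls out of the caller's transaction
EXEC sp_serveroption N'127.0.0.1', N'rpc out', N'true'
EXEC sp_serveroption N'127.0.0.1', N'remote proc transaction promotion', N'false'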

Related

Does SQL Server deferred name resolution work for functions?

SQL Server has a Deferred Name Resolution feature; read here for details:
https://msdn.microsoft.com/en-us/library/ms190686(v=sql.105).aspx
That page only talks about stored procedures, so it seems Deferred Name Resolution works only for stored procedures and not for functions. I did some testing.
create or alter function f2(@i int)
returns table
as
return (select fff from xxx)
go
Note the table xxx does not exist. When I execute the above CREATE statement, I got the following message:
Msg 208, Level 16, State 1, Procedure f2, Line 4 [Batch Start Line 22]
Invalid object name 'xxx'.
It seems that SQL Server instantly found the non-existent table xxx, which would prove that Deferred Name Resolution doesn't work for functions. However, when I slightly change it as follows:
create or alter function f1(@i int)
returns int
as
begin
declare @x int;
select @x = fff from xxx;
return @x
end
go
I can successfully execute it:
Commands completed successfully.
When executing the following statement:
select dbo.f1(3)
I got this error:
Msg 208, Level 16, State 1, Line 34
Invalid object name 'xxx'.
So here it seems the resolution of the table xxx was deferred. The most important difference between these two cases is the return type. However, I can't explain when Deferred Name Resolution will work for functions and when it won't. Can anyone help me understand this? Thanks in advance.
It feels like you were looking for an understanding of why your particular example didn't work. Quassnoi's answer is correct but didn't offer a reason, so I went searching and found this MSDN Social answer by Erland Sommarskog. The interesting part:
However, it does not extend to views and inline-table functions. For
stored procedures and scalar functions, all SQL Server stores in the
database is the text of the module. But for views and inline-table
functions (which are parameterised view by another name) SQL Server
stores metadata about the columns etc. And that is not possible if the
table is missing.
Hope that helps with understanding why :-)
EDIT:
I did take some time to confirm Quassnoi's comment that sys.columns, as well as several other tables, does contain some metadata about the inline function, so I am unsure whether there is other metadata not written. However, I thought I would add a few other notes I was able to find that may help explain, in conjunction.
First a quote from Wayne Sheffield's blog:
In the MTVF, you see only an operation called “Table Valued Function”. Everything that it is doing is essentially a black box – something is happening, and data gets returned. For MTVFs, SQL can’t “see” what it is that the MTVF is doing since it is being run in a separate context. What this means is that SQL has to run the MTVF as it is written, without being able to make any optimizations in the query plan to optimize it.
Then from the SQL Server 2016 Exam 70-761 by Itzik Ben-Gan (Skill 3.1):
The reason that it's called an inline function is because SQL Server inlines, or expands, the inner query definition, and constructs an internal query directly against the underlying tables.
So it seems the inline function essentially returns a query and is able to optimize it with the outer query, not allowing the black-box approach and thus not allowing deferred name resolution.
What you have in your first example is an inline function (it does not have BEGIN/END).
Inline functions can only be table-valued.
If you used a multi-statement table-valued function for your first example, like this:
CREATE OR ALTER FUNCTION
fn_test(@a INT)
RETURNS @ret TABLE
(
a INT
)
AS
BEGIN
INSERT
INTO @ret
SELECT a
FROM xxx
RETURN
END
it would compile fine and fail at runtime (if xxx did not exist), just as a stored procedure or a scalar UDF would.
So yes, DNR does work for all multi-statement functions (those with BEGIN/END), regardless of their return type.
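A quick way to see the dividing line is to compare a stored procedure (only the module text is stored, so DNR applies) with an inline TVF (column metadata is required at CREATE time), both referencing the missing table:
-- compiles fine despite the missing table: DNR applies to procedures
create or alter procedure p_test as select fff from xxx
go
-- fails immediately with "Invalid object name 'xxx'": inline TVFs store column metadata
create or alter function f_inline() returns table as return (select fff from xxx)
go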

Track or log calls to a user-defined function in SQL Server

Understanding that side-effecting operators (like "insert") are disallowed in user-defined functions, how does one log (or otherwise track) calls to a specific user-defined function? I'd also like to capture the parameters passed into the UDF.
Ideally, the log would be a table into which information (time stamp and parameter values) about each call to the UDF is inserted. Reports and usage metrics could then be derived from that table.
I can't rewrite the UDF as a stored procedure, even of the same name, without breaking many downstream systems that are out in the wild that expect a UDF and that I have no control over.
Nor am I willing to enable any type of command shell features on our server that will diminish SQL Server's best-practice security defaults.
I found a solution to your problem. It's a little bit tricky and looks like a hack, but it seems it's impossible to solve any other way.
The idea is to create a .NET SQL function which logs data wherever you need (file, Windows Event Log, database and so on), then create a SQL UDF which calls this .NET function, and finally call that SQL function from your functions, passing all the parameters that need to be logged. SQL Server doesn't check what is inside the .NET function, so you can put whatever logic you need there.
The idea of how to create a .NET SQL function without any security limitations is taken from this post.
So, create a .NET class library project with this one file:
using System;
namespace SqlTest
{
public class LogEvent
{
[Microsoft.SqlServer.Server.SqlFunction]
public static int Log(string data)
{
System.IO.File.AppendAllText(@"C:\Log\LogUDF.txt", data);
return 0;
}
}
}
Sign it with a pfx certificate (project properties -> Signing tab).
Next, run this script:
USE [master]
CREATE ASYMMETRIC KEY LogKey FROM EXECUTABLE FILE =
'C:\Work\ConsoleApplication1\SqlTest\bin\Debug\SqlTest.dll'
CREATE LOGIN LogLogin FROM ASYMMETRIC KEY LogKey
GRANT UNSAFE ASSEMBLY TO LogLogin
GO
USE [MyDB]
CREATE ASSEMBLY SqlTest FROM
'C:\Work\ConsoleApplication1\SqlTest\bin\Debug\SqlTest.dll'
WITH PERMISSION_SET = unsafe
GO
CREATE FUNCTION dbo.Log( @data as nvarchar(200) )
RETURNS int
AS EXTERNAL NAME SqlTest.[SqlTest.LogEvent].Log
Here you need to change the path to your compiled library and replace MyDB with your database name.
This creates the dbo.Log SQL function. Next you can call it wherever you need it, for example from this TestFunction:
CREATE FUNCTION TestFunction
(
@p1 int
)
RETURNS int
AS
BEGIN
DECLARE @temp int
SELECT @temp = [dbo].[Log] ('fff')
RETURN 1
END
So, calling SELECT dbo.TestFunction(1) will write the text 'fff' to the file C:\Log\LogUDF.txt.
That's it. A few important notes:
The SQL Server service account needs permission (login/user) to write to the file C:\Log\LogUDF.txt.
You need to be a SQL Server admin to install the assembly.
You can try the following:
1) Use SQL Profiler to check caught data for each of your different scenarios
Check SP:StmtCompleted to ensure that you catch the statements that execute within the stored procedures or user-defined functions. Also make sure you include all required columns (TextData, LoginName, ApplicationName etc.). TextData is essential for this solution.
2) Check each scenario to see what you receive in the profiler. E.g.:
-- a mock function that is similar to what I have understood your function does
alter FUNCTION dbo.GetLoginResult(@username VARCHAR(64))
RETURNS INT
AS
BEGIN
DECLARE @l INT = LEN(@username)
IF (@l < 10)
RETURN 0
RETURN 1
-- DECLARE @Result INT
-- SELECT @Result = DATEPART(s, GETDATE()) % 3
-- RETURN @Result
END
go
select dbo.GetLoginResult('SomeGuy') --> `IF (@l < 10)` and `RETURN 0`
GO
select dbo.GetLoginResult('Some girl with a long name') --> `IF (@l < 10)` and `RETURN 1`
GO
So, if you can adapt your function so that a specific statement is executed when a particular output is about to be returned, you can infer the result of the function from the profiled information (input and output values do not seem to be caught by the profiler).
3) Server-side tracing
As already suggested, SQL Profiler adds significant overhead, so you should use server-side tracing instead. Luckily, you can export the trace definition you just created, as indicated here:
i) SQL Profiler -> File -> Export -> Script Trace Definition -> For SQL Server ..
ii) Replace the path in the generated script and run it -> remember the generated id (it is the trace id)
iii) After stopping the trace (the file is locked by the SQL Server process), you can open the trace file in Profiler and export its data to a table.
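Once the trace has been stopped, you can also load the trace file into a table directly from T-SQL with fn_trace_gettable (the file path here is illustrative):
SELECT CAST(TextData AS nvarchar(max)) AS TextData, LoginName, ApplicationName, StartTime
FROM fn_trace_gettable(N'C:\Traces\udf_trace.trc', DEFAULT)
WHERE CAST(TextData AS nvarchar(max)) LIKE '%GetLoginResult%'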

Dynamic SQL without having to use fully qualified table names in SQL (Openrowset?)

I have a large set of pre-existing sql select statements.
From a stored procedure on [Server_A], I would like to execute each of these statements on multiple different SQL Servers and databases (the list is stored in a local table on [Server_A]), and return the results into a table on [Server_A].
However, I do not want to have to use fully qualified table names in my sql statements. I want to execute "select * from users", not "select * from ServerName.DatabaseName.SchemaName.Users"
I've investigated using Openrowset, but I am unable to find any examples where both the Server name and DatabaseName can be specified as an attribute of the connection, rather than physically embedded within the actual SQL statement.
Is Openrowset capable of this? Is there an alternate way of doing this (from within a stored procedure, as opposed to resorting to Powershell or some other very different approach?)
The inevitable "Why do I want to do this?"
You can do it (specify the server and database in the connection attributes and then use entirely generic SQL across all databases) in virtually every other language that accesses SQL Server.
Changing all my pre-existing complex SQL to be fully qualified is a huge PITA (besides, you simply shouldn't have to do this).
This can be done quite easily via SQLCLR. If the result set is to be dynamic then it needs to be a Stored Procedure instead of a TVF.
Assuming you are doing a Stored Procedure, you would just:
Pass in @ServerName, @DatabaseName, @SQL
Create a SqlConnection with a Connection String of: String.Concat("Server=", ServerName.Value, "; Database=", DatabaseName.Value, "; Trusted_Connection=yes; Enlist=false;") or use ConnectionStringBuilder
Create a SqlCommand for that SqlConnection and using SQL.Value.
Enable Impersonation via SqlContext.WindowsIdentity.Impersonate();
_Connection.Open();
undo Impersonation -- was only needed to establish the connection
_Reader = Command.ExecuteReader();
SqlContext.Pipe.Send(_Reader);
Dispose of Reader, Command, Connection, and ImpersonationContext in finally clause
This approach is less of a security issue than enabling Ad Hoc Distributed Query access as it is more insulated and controllable. It also does not allow for a SQL Server login to get elevated permissions since a SQL Server login will get an error when the code executes the Impersonate() method.
Also, this approach allows for multiple result sets to be returned, something that OPENROWSET doesn't allow for:
Although the query might return multiple result sets, OPENROWSET returns only the first one.
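The T-SQL side of registering such a procedure would mirror the CLR examples elsewhere on this page; the assembly, procedure, and parameter names below are hypothetical:
CREATE ASSEMBLY RemoteQuery FROM 'C:\Work\RemoteQuery\bin\Debug\RemoteQuery.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS  -- outbound connections need at least EXTERNAL_ACCESS
GO
CREATE PROCEDURE dbo.ExecuteRemoteQuery
    @ServerName nvarchar(128),
    @DatabaseName nvarchar(128),
    @SQL nvarchar(max)
AS EXTERNAL NAME RemoteQuery.[StoredProcedures].ExecuteRemoteQuery
GO
EXEC dbo.ExecuteRemoteQuery N'OtherServer', N'OtherDb', N'select * from users'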
UPDATE
Modified pseudo-code based on comments on this answer:
Pass in @QueryID
Create a SqlConnection (_MetaDataConnection) with a Connection String of: Context Connection = true;
Query _MetaDataConnection to get ServerName, DatabaseName, and Query based on QueryID.Value via SqlDataReader
Create another SqlConnection (_QueryConnection) with a Connection String of: String.Concat("Server=", _Reader["ServerName"].Value, "; Database=", _Reader["DatabaseName"].Value, "; Trusted_Connection=yes; Enlist=false;") or use ConnectionStringBuilder
Create a SqlCommand (_QueryCommand) for _QueryConnection using _Reader["SQL"].Value.
Using _MetaDataConnection, query to get parameter names and values based on QueryID.Value
Cycle through SqlDataReader to create SqlParameters and add to _QueryCommand
_MetaDataConnection.Close();
Enable Impersonation via SqlContext.WindowsIdentity.Impersonate();
_QueryConnection.Open();
undo Impersonation -- was only needed to establish the connection
_Reader = _QueryCommand.ExecuteReader();
SqlContext.Pipe.Send(_Reader);
Dispose of Readers, Commands, Connections, and ImpersonationContext in finally clause
If you want to execute a sql statement on every database in a instance you can use (the unsupported, unofficial, but widely used) exec sp_MSforeachdb like this:
EXEC sp_Msforeachdb 'use [?]; select * from users'
This will be the equivalent of going through every database through a
use db...
go
select * from users
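If you want the rows collected centrally, the per-database statement can insert into a session temp table, which stays visible inside sp_MSforeachdb. A sketch, assuming each users table has a name column and guarding against databases that lack the table:
CREATE TABLE #all_users (db sysname, name sysname)
EXEC sp_MSforeachdb 'use [?];
    if object_id(''users'') is not null
        insert into #all_users select db_name(), name from users'
SELECT * FROM #all_users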
This is an interesting problem because I googled for many, many hours, and found several people trying to do exactly the same thing as asked in the question.
Most common responses:
Why would you want to do that?
You can not do that, you must fully qualify your objects names
Luckily, I stumbled upon the answer, and it is brutally simple. I think part of the problem is, there are so many variations of it with different providers & connection strings, and there are so many things that could go wrong, and when one does, the error message is often not terribly enlightening.
Regardless, here's how you do it:
If you are using static SQL:
select * from OPENROWSET('SQLNCLI','Server=ServerName[\InstanceName];Database=AdventureWorks2012;Trusted_Connection=yes','select top 10 * from HumanResources.Department')
If you are using Dynamic SQL - since OPENROWSET does not accept variables as arguments, you can use an approach like this (just as a contrived example):
declare @sql nvarchar(4000) = N'select * from OPENROWSET(''SQLNCLI'',''Server=ServerName[\InstanceName];Database=AdventureWorks2012;Trusted_Connection=yes'',''@zzz'')'
set @sql = replace(@sql,'@zzz','select top 10 * from HumanResources.Department')
EXEC sp_executesql @sql
Noteworthy: in case you think it would be nice to wrap this syntax up in a nice table-valued function that accepts @ServerName, @DatabaseName, @SQL: you cannot, as a TVF's result set columns must be determined at compile time.
Relevant reading:
http://blogs.technet.com/b/wardpond/archive/2005/08/01/the-openrowset-trick-accessing-stored-procedure-output-in-a-select-statement.aspx
http://blogs.technet.com/b/wardpond/archive/2009/03/20/database-programming-the-openrowset-trick-revisited.aspx
Conclusion:
OPENROWSET is the only way to avoid full qualification of object names entirely; even with EXEC AT you still have to prefix objects with the database name.
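For comparison, a sketch of the EXEC ... AT form (the linked server name is illustrative); note that the database prefix is still required, and parameters can be passed positionally with ? placeholders:
EXEC ('select top 10 * from AdventureWorks2012.HumanResources.Department') AT [MyLinkedServer]
EXEC ('select * from AdventureWorks2012.HumanResources.Department where DepartmentID = ?', 7) AT [MyLinkedServer]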
Extra tip: the prevalent opinion seems to be that OPENROWSET shouldn't be used "because it is a security risk" (without any details on the risk). My understanding is that the risk applies only if you are using SQL Server authentication; further details here:
https://technet.microsoft.com/en-us/library/ms187873%28v=sql.90%29.aspx?f=255&MSPPError=-2147217396
When connecting to another data source, SQL Server impersonates the login appropriately for Windows authenticated logins; however, SQL Server cannot impersonate SQL Server authenticated logins. Therefore, for SQL Server authenticated logins, SQL Server can access another data source, such as files, nonrelational data sources like Active Directory, by using the security context of the Windows account under which the SQL Server service is running. Doing this can potentially give such logins access to another data source for which they do not have permissions, but the account under which the SQL Server service is running does have permissions. This possibility should be considered when you are using SQL Server authenticated logins.

Why are table valued parameters to SQL Server stored procedures required to be input READONLY?

Can anyone explain the design decision behind preventing table valued parameters from being specified as output parameters to stored procedures?
I can't count the number of times I've started building out a data model hoping to completely lock down my tables to external access (you know...implementation details), grant applications access to the database through stored procedures only (you know... the data interface) and communicate back and forth with TVPs only to have SSMS call me naughty for having the audacity to think that I can use a user-defined table type as the transfer object between my data service and my application.
So someone please provide me a good reason why TVPs were designed to be readonly input parameters.
In the presentation Optimizing Microsoft SQL Server 2008 Applications Using Table Valued Parameters, XML, and MERGE by Michael Rys, he says (at 32:52):
Note that in SQL Server 2008 table valued parameters are read only.
But as you notice we actually require you to write READONLY. So that
actually then means that at some point in the future maybe if you say
please, please please often enough we might be able to actually make
them writable as well at some point. But at the moment they are read
only.
Here is the connect item you should use to add your "please". Relax restriction that table parameters must be readonly when SPs call each other.
Srini Acharya made a comment on the connect item.
Allowing table valued parameters to be read/write involves quite a bit
of work on the SQL Engine side as well as client protocols. Due to
time/resource constraints as well as other priorities, we will not be
able to take up this work as part of SQL Server 2008 release. However,
we have investigated this issue and have this firmly in our radar to
address as part of the next release of SQL Server.
Table-valued parameters have the following restrictions (source: MSDN):
SQL Server does not maintain statistics on columns of table-valued parameters.
Table-valued parameters must be passed as input READONLY parameters to Transact-SQL routines. You cannot perform DML operations such as UPDATE, DELETE, or INSERT on a table-valued parameter in the body of a routine.
You cannot use a table-valued parameter as the target of a SELECT INTO or INSERT EXEC statement. A table-valued parameter can be in the FROM clause of SELECT INTO or in the INSERT EXEC string or stored procedure.
There are a few options to overcome this restriction. One is:
CREATE TYPE RTableType AS TABLE(id INT, NAME VARCHAR )
go
CREATE PROCEDURE Rproc @Rtable RTABLETYPE READONLY,
@id INT
AS
BEGIN
SELECT *
FROM @Rtable
WHERE ID = @id
END
go
DECLARE @Rtable RTABLETYPE
DECLARE @Otable RTABLETYPE
INSERT INTO @Rtable
VALUES (1,'a'),
(2,'b')
INSERT @Otable
EXEC Rproc
@Rtable,
2
SELECT *
FROM @Otable
Through this you can get the table values out.
With respect to (emphasis added):
So someone please provide me a good reason why TVPs were designed to be readonly input parameters.
I just posted a more detailed answer to this on DBA.StackExchange here:
READONLY parameters and TVP restrictions
But the summary of it goes like this:
According to this blog post ( TSQL Basics II - Parameter Passing Semantics ), a design goal of Stored Procedure OUTPUT parameters is that they merely mimic "by reference" behavior when the Stored Procedure completes successfully! But when there is an error that causes the Stored Procedure to abort, then any changes made to any OUTPUT parameters would not be reflected in the current value of those variables upon control returning to the calling process.
But when TVPs were introduced, they implemented them as truly passing by reference since continuing the "by value" model -- in which a copy of it is made to ensure that changes are lost if the Stored Procedure does not complete successfully -- would not be efficient / scalable, especially if a lot of data is being passed in through TVP.
So there is only one instance of the Table Variable that is the TVP, and any changes made to it within any Stored Procedure (if they were not restricted to being READONLY) would be immediately persisted and would remain, even if the Stored Procedure encountered an error. This violates the design goal stated at the beginning of this summary. And, there is no option for somehow tying changes made to a TVP to a transaction (even something handled automatically, behind the scenes) since table variables are not bound by transactions.
Hence, marking them as READONLY is the only way (at the moment) to maintain the design goal of Stored Procedure parameters such that they do not reflect changes made within the Stored Procedure unless the parameter is declared as OUTPUT and the Stored Procedure completes successfully.
Be forewarned: this code will not work. That is the problem.
Note that all code was entered directly into the post from memory. I may have a typo or some similar error in the example. It is just to demonstrate the technique that this would facilitate, which won't work with any version of SQL Server released at the time of this writing. So it doesn't really matter whether it currently compiles or not.
I know this question is old by now, but perhaps someone coming across my post here might benefit from understanding why it's a big deal that TVPs can't be directly manipulated by a stored proc and read as output parameters by the calling client.
"How do you..." questions regarding OUTPUT TVPs have littered SQL Server forums for more than half a decade now. Nearly every one of them involves someone attempting some supposed workaround that completely misses the point of the question in the first place.
It is a complete non sequitur that you can "get a result set that matches a table type" by creating a table-typed variable, inserting into it and then returning a read from it. When you do that, the result set is still not a message. It is an ad hoc result set that contains arbitrary columns that "just happen to match" a UDTT. What is needed is the ability to do the following:
create database [Test]
create schema [Request]
create schema [Response]
create schema [Resources]
create schema [Services]
create schema [Metadata]
create table [Resources].[Foo] ( [Value] [varchar](max) NOT NULL, [CreatedBy] [varchar](max) NOT NULL) ON [PRIMARY]
insert into [Resources].[Foo] values('Bar', 'kalanbates');
create type [Request].[Message] AS TABLE([Value] [varchar](max) NOT NULL)
create type [Response].[Message] AS TABLE([Resource] [varchar](max) NOT NULL, [Creator] [varchar](max) NOT NULL, [LastAccessedOn] [datetime] NOT NULL)
create PROCEDURE [Services].[GetResources]
(@request [Request].[Message] READONLY, @response [Response].[Message] OUTPUT)
AS
insert into @response
select [Resource].[Value] [Resource]
,[Resource].[CreatedBy] [Creator]
,GETDATE() [LastAccessedOn]
from [Resources].[Foo] as [Resource]
inner join @request as [Request] on [Resource].[Value] = [Request].[Value]
GO
and have an ADO.NET client be able to say:
public IEnumerable<Resource> GetResources(IEnumerable<string> request)
{
using(SqlConnection connection = new SqlConnection("Server=blahdeblah;database=Test;notGoingToFillOutRestOfConnString"))
{
connection.Open();
using(SqlCommand command = connection.CreateCommand())
{
command.CommandText = "[Services].[GetResources]";
command.CommandType = CommandType.StoredProcedure;
SqlParameter _request = new SqlParameter("@request", SqlDbType.Structured);
_request.TypeName = "[Request].[Message]";
_request.Value = CreateRequest(request, _request.TypeName);
command.Parameters.Add(_request);
SqlParameter response = new SqlParameter("@response", SqlDbType.Structured) { TypeName = "[Response].[Message]", Direction = ParameterDirection.Output };
command.Parameters.Add(response);
command.ExecuteNonQuery();
return Materializer.Create<List<ResourceEntity>>(response).AsEnumerable(); //or something to that effect.
//The point is, messages are sent to and received from the database.
//The "result set" contained within response is not dynamic. It has a structure that can be *reliably* anticipated.
}
}
}
private static IEnumerable<SqlDataRecord> CreateRequest(IEnumerable<string> values, string typeName)
{
//Optimally,
//1)Call database stored procedure that executes a select against the information_schema to retrieve type metadata for "typeName", or something similar
//2)Build out SqlDataRecord from returned MetaData
//Suboptimally, hard code "[request].[Message]" metadata into a SqlMetaData collection
//for example purposes.
SqlMetaData[] metaData = new SqlMetaData[1];
metaData[0] = new SqlMetaData("Value", SqlDbType.VarChar, SqlMetaData.Max);
SqlDataRecord record = new SqlDataRecord(metaData);
foreach(string value in values)
{
record.SetString(0,value);
yield return record;
}
}
The point here is that with this structure, the Database defines [Response].[Message], [Request].[Message], and [Services].[GetResources] as its Service Interface. Calling clients interact with "GetResources" by sending a pre-determined message type and receive their response in a pre-determined message type. Of course it can be approximated with an XML output parameter, and you can somewhat infer a pre-determined message type by instituting tribal requirements that retrieval stored procedures must insert their responses into a local [Response].[Message] table-typed variable and then select directly out of it to return their results. But none of those techniques is nearly as elegant as a structure where a stored procedure fills a response "envelope" provided by the client with its payload and sends it back.
Still in 2020, on SQL version "Microsoft SQL Azure (RTM) - 12.0.2000.8", I am not able to edit the table-valued parameter within the stored procedure. So I worked around it by copying the data into a temp table and editing that:
ALTER PROCEDURE [dbo].[SP_APPLY_CHANGESET_MDTST09_MSG_LANG]
@CHANGESET AS [dbo].[MDTSTYPE09_MSG_LANG] READONLY
AS
BEGIN
SELECT * INTO #TCHANGESET FROM @CHANGESET
UPDATE #TCHANGESET SET DTST08_MSG_K = 0 WHERE ....
...............

Issue with parameters in SQL Server stored procedures

I remember reading a while back that SQL Server can randomly slow down and/or take a stupidly long time to execute a stored procedure when it is written like:
CREATE PROCEDURE spMyExampleProc
(
@myParameter INT
)
AS
BEGIN
SELECT something FROM myTable WHERE myColumn = @myParameter
END
The way to fix this error is to do this:
CREATE PROCEDURE spMyExampleProc
(
@myParameter INT
)
AS
BEGIN
DECLARE @newParameter INT
SET @newParameter = @myParameter
SELECT something FROM myTable WHERE myColumn = @newParameter
END
Now my question is firstly is it bad practice to follow the second example for all my stored procedures? This seems like a bug that could be easily prevented with little work, but would there be any drawbacks to doing this and if so why?
When I read about this, the problem was that the same proc would take varying times to execute depending on the value in the parameter. If anyone can tell me what this problem is called / why it occurs I would be really grateful; I can't seem to find the link to the post anywhere, and it seems like a problem that could affect our company.
The problem is "parameter sniffing" (SO Search)
The pattern with @newParameter is called "parameter masking" (also SO Search)
You could always use this masking pattern, but it isn't always needed. For example, a simple select by unique key, with no child tables or other filters, should behave as expected every time.
Since SQL Server 2008, you can also use OPTIMIZE FOR UNKNOWN (SO). Also see Alternative to using local variables in a where clause and Experience with when to use OPTIMIZE FOR UNKNOWN.
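A sketch of the hint applied to the original procedure; OPTION (RECOMPILE) is the heavier alternative, trading plan reuse for a plan tailored to each call:
CREATE PROCEDURE spMyExampleProc
(
    @myParameter INT
)
AS
BEGIN
    SELECT something FROM myTable
    WHERE myColumn = @myParameter
    OPTION (OPTIMIZE FOR UNKNOWN)  -- or: OPTION (RECOMPILE)
END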
