tSQLt - Handling multiple SQL Server result sets - sql-server

I'm creating a SQL Server unit test using tSQLt.
The proc that I'm testing returns 3 result sets. My web API handles the multiple result sets and sends them to the UI fine.
Question: In my SQL Server unit test, how do I handle the 3 result sets? If the proc returns one result set, it is easy to handle. I use the following:
Insert Into #ReturnData
(
ID,
Data1,
Data2
)
Exec @Ret = StoreProcName
Then I can run a bunch of checks against the #ReturnData temp table. But I don't understand how to handle/test a proc if it returns multiple result sets. Is this even possible?
Thanks.

The method I'd suggest you use is tSQLt.ResultSetFilter(). This takes a parameter for the number of the result set to return and calls your code under test (StoreProcName in your example), returning only that result set, which you can then capture with Insert..Exec.
The downside of this procedure is that it only captures one result set per run - so you need to call it multiple times to return all of the result sets. I usually only look at one result set per test, allowing me to concentrate on answering one question in that test, but if your result sets interrelate and you need both in order for your test to be evaluated, then you will need to call tSQLt.ResultSetFilter (and hence the code under test) more than once in your test. The manual has more info on this situation.
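For example, a minimal sketch of capturing just the second result set into a temp table; the temp table name and column definitions here are assumptions based on your example:
Create Table #ResultSet2 (ID Int, Data1 Varchar(100), Data2 Varchar(100)); -- columns must match result set 2

Insert Into #ResultSet2 (ID, Data1, Data2)
Exec tSQLt.ResultSetFilter 2, 'Exec StoreProcName';

-- now assert against #ResultSet2, e.g. with tSQLt.AssertEqualsTable or plain checks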
As an aside, I have previously blogged about some unexpected behaviour I encountered when using insert..exec with SPs that return multiple identical result sets which may be of interest.

DaveGreen has the answer. But for completeness, I wanted to share this, which expands on the basics: http://tsqlt.org/201/using-tsqlt-resultsetfilter/
If you call a stored procedure and need to pass in parameters, do the following:
Create a @Variable that holds the 'exec ...' string with the parameter values embedded. Then you can do something like this:
Declare @Variable Varchar(max)
Set @Variable = 'exec STOREDPROCNAME ''param1'', ''param2''';
EXEC tSQLt.ResultSetFilter 2, @Variable
The number 2 specifies the second result set that is returned.
Nice and snappy ... ;-)

Related

SQL Injection, ignore first select command

I am trying to build a scenario that would allow me to expose additional data from a server (for case-demo purposes). The server calls vulnerable SQL code:
EXEC my_storeProc '12345'
where "12345" (not the single quotes) is the parameter. This performs a SELECT statement. I would like to eliminate this execution and instead call my own select statement, however the server side code will only accept the first select statement called, contained within the aforementioned EXEC call. Calling the second statement is easy:
EXEC my_storeProc '12345 ' select * from MySecondTable--
(the -- at the end will comment out the closing single quote added by the server, to prevent errors). My problem is that although there are 2 select statements, the server will only parse the first one. Is there a way to cancel the first EXEC call without throwing an error, so that the second one would be taken instead? Perhaps even a UNION, but there isn't much I can do with only one variable open to exploit (the variable being 12345 in this case).
You have to think of how it will be executed; specifically, you want it called so it doesn't raise an exception and put the kibosh on the whole statement. You can't set the result to always true with a proc call, so there is no real way to escape the proc. Instead, you'll want to slip a second command in. Your desired code looks like:
exec my_Storeproc '1234'; select * from mysecondtable
So we need to close the quotes and start a new statement. That means the string you inject needs to be:
1234'; select * from mysecondtable where 1 = '1
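Assuming the server-side code simply concatenates the input between single quotes (that assumption is based on your description), the batch that actually reaches SQL Server would then be something like:
-- what the server executes after concatenating the injected input
EXEC my_storeProc '1234'; select * from mysecondtable where 1 = '1'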
There is a flaw in this: whatever command you inject is not returned to the UI. To get the data, you'll have to add a server connection to the second command.
To make the second command unnecessary you would have to inject code into the proc itself, which is a non-starter: the proc is already compiled, and SQL injection relies on confusing the compiler as to what is data and what is commands. For a more verbose explanation of that, check out this answer:
https://security.stackexchange.com/a/25710

Safely converting a function into a procedure in SQL Server

I've been converting an Oracle schema to a SQL Server one and got the following error:
Invalid use of a side-effecting operator 'SET COMMAND' within a function.
In my case, modifying the database involved this:
set @originalDateFirst = @@DateFirst;
set datefirst 1;
set @DayOfWeek = datepart(weekday,@DATE); -- 1 to 5 = Weekday
set datefirst @originalDateFirst;
Ideally this wouldn't have modified anything, but the datepart function relies on session state (the DATEFIRST setting).
I'm not really from a database background, so I was slightly baffled by this, but reading other answers it looked like all I needed to do was swap the word FUNCTION for PROCEDURE and I'd be away. However, I then got the following error:
Incorrect syntax near 'RETURNS'.
Reading around a bit, I learned that stored procedures aren't allowed to return anything they like - only integers. However, the integer normally has the same semantics as a console application's return code - 0 is success and anything else is an error.
Luckily the type I wanted to return was an integer, so fixing the next error:
Incorrect syntax near 'RETURNS'.
Involved just removing
RETURNS INTEGER
from the function/procedure. However, I'm unsure whether there are any weird side effects caused by this error-code interpretation that will be outside of my control. The function actually just returns either 0 or 1 as a true or false flag (where 1 is true and 0 is false, as you might expect). Therefore one of my return values would count as an 'error'.
What, if any, are the consequences of piggybacking on the return code of a procedure rather than using an out parameter? Is it just bad practice? If it's safe to do this I'd certainly prefer to, so I don't need to change any calling code.
This isn't an answer to your question as posed, but may be a better solution to the overall problem.
Rather than having to rely on a particular DATEFIRST setting, or changing the DATEFIRST setting, why not use an expression that always returns reliable results no matter what the DATEFIRST setting is?
For example, this expression:
select (DATEPART(weekday,GETDATE()) + 7 - DATEPART(weekday,'20140406')) % 7
always returns 1 on Mondays, 2 on Tuesdays, ..., 5 on Fridays. No matter what settings are in effect.
So, your entire original block of 4 lines of code could just be:
set @DayOfWeek = (DATEPART(weekday,@Date) + 7 -
DATEPART(weekday,'20140406')) % 7; -- 1 to 5 = Weekday
And now you should be able to continue writing it as a function rather than a stored procedure.
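For instance, a minimal sketch of the whole thing wrapped in a function; the function and parameter names here are hypothetical:
CREATE FUNCTION dbo.WeekdayNumber (@Date datetime)
RETURNS int
AS
BEGIN
    -- 0 = Sunday, 1 = Monday ... 5 = Friday, 6 = Saturday; independent of DATEFIRST
    RETURN (DATEPART(weekday, @Date) + 7 - DATEPART(weekday, '20140406')) % 7;
END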
If it's safe to do this I'd certainly prefer to, so I don't need to change any calling code.
Which you would have to do if you did change your function into a stored procedure. There's no syntax where you can look at the call and be in doubt as to whether a stored procedure or a function is being invoked - they always use different syntaxes. A procedure is executed by being the first piece of text in a batch, or by being preceded by the EXEC keyword, and takes no parentheses.
A function, on the other hand, always has to have parentheses applied when calling it, and must appear as an expression within a larger statement (such as SELECT). You cannot EXEC a function, nor call one by it being the first piece of text in a batch.
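As a quick illustration (the object names here are made up):
SELECT dbo.fn_GetValue(1);   -- a function: parentheses required, used inside an expression
EXEC dbo.usp_GetValue 1;     -- a procedure: EXEC (or first statement in a batch), no parentheses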
An out param can be of (almost) any valid datatype; RETURN is always an int, not necessarily 0 or 1.
Because you can't use a procedure as a query source (it's not a table), to consume a return value from a procedure, declare a variable and exec the procedure like this:
create procedure p as
-- some code
return 13
go
declare @r int
exec @r = p
select @r
I wouldn't call it piggybacking; it's a regular way to return a success/error code, for example. But how you interpret the return value is entirely up to the calling code.
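For comparison, here's a rough sketch of the OUTPUT parameter route mentioned above; the procedure and variable names are made up:
create procedure p2 @result varchar(20) output as
-- some code
set @result = 'thirteen'
go
declare @s varchar(20)
exec p2 @result = @s output
select @s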
Functions, on the other hand, can be used as a query source (if table-valued) or as a scalar value in a select list, where clause, etc. But you can't modify data inside functions, and there are other restrictions on them (as you've learned already). Furthermore, functions can have a nasty impact on performance (except inline table-valued functions, which are pretty much safe to use).

ADO.Net and stored procedure output parameters

In Ado.net, the code is calling a stored procedure with input and output parameters.
I understand that if some of the input parameters are optional (have default values in the SP), the code doesn't need to define and send those parameter values unless it needs to.
My question is:
Does the same apply to optional output parameters? Can the code ignore optional SP output parameters (those that have a default value)?
I could have tested it myself but I don't have a working example right now, and I am short of time.
Thank you.
Yes. If a parameter has a default value then it may be safely omitted, regardless of the parameter direction (INPUT or OUTPUT). The fact that the procedure is called from ADO.Net is entirely irrelevant. E.g.:
create procedure usp_test
@a int = 1 output,
@b int = 2
as
begin
set @a = @b;
end
go
exec usp_test
Whether it is safe to do so from a business-rules point of view (i.e. ignoring an OUTPUT parameter's returned value) is entirely up to the specifics of the procedure and your app.
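For contrast, a quick sketch of actually capturing the OUTPUT value from the same usp_test procedure when you do want it:
declare @result int
exec usp_test @a = @result output
select @result   -- returns 2, the default of @b copied into @a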
EDIT: Turns out I was wrong here, but I'm going to leave my answer because the information on SqlParameter might be useful. Sorry for the inaccuracy though.
I don't believe so. You must send in an OUTPUT parameter, and in ADO.NET this is accomplished by adding a SqlParameter with its ParameterDirection property set to ParameterDirection.Output.
http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlparameter.direction.aspx
http://msdn.microsoft.com/en-us/library/system.data.parameterdirection.aspx

How to declare local variables in postgresql?

There is an almost identical, but not really answered question here.
I am migrating an application from MS SQL Server to PostgreSQL. In many places in code I use local variables so I would like to go for the change that requires less work, so could you please tell me which is the best way to translate the following code?
-- MS SQL Syntax: declare 2 variables, assign value and return the sum of the two
declare @One integer = 1
declare @Two integer = 2
select @One + @Two as SUM
this returns:
SUM
-----------
3
(1 row(s) affected)
I will use PostgreSQL 8.4, or even 9.0 if it contains significant features that will simplify the translation.
PostgreSQL historically doesn't support procedural code at the command level - only within functions. However, in PostgreSQL 9, support has been added to execute an inline code block that effectively supports something like this, although the syntax is perhaps a bit odd and there are many restrictions compared to what you can do with SQL Server. Notably, the inline code block can't return a result set, so it can't be used for what you outline above.
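For reference, a minimal sketch of such an inline block (a DO block, available from PostgreSQL 9.0); it can perform work and RAISE NOTICE, but it cannot return rows:
DO $$
DECLARE
one int := 1;
two int := 2;
BEGIN
RAISE NOTICE 'sum = %', one + two;  -- printed as a notice, not returned as a result set
END
$$;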
In general, if you want to write some procedural code and have it return a result, you need to put it inside a function. For example:
CREATE OR REPLACE FUNCTION somefuncname() RETURNS int LANGUAGE plpgsql AS $$
DECLARE
one int;
two int;
BEGIN
one := 1;
two := 2;
RETURN one + two;
END
$$;
SELECT somefuncname();
The PostgreSQL wire protocol doesn't, as far as I know, allow for things like a command returning multiple result sets. So you can't simply map T-SQL batches or stored procedures to PostgreSQL functions.

Stored procedure output parameters in SQL Server Profiler

I've got a stored procedure with an int output parameter. If I run SQL Server Profiler, execute the stored procedure via some .Net code, and capture the RPC:Completed event, the TextData looks like this:
declare @p1 int
set @p1=13
exec spStoredProcedure @OutParam=@p1 output
select @p1
Why does it look like it's getting the value of the output parameter before executing the stored procedure?
The RPC:Completed event class indicates that a remote procedure call has been completed, so the output parameter is actually known at that point. See if tracing the RPC:Starting event shows you what you expect.
This is, no matter how you look at it, a bug. The intent of the SQL Profiler "TextData" is to enable someone to understand and repeat the stored procedure call. In this case, running this T-SQL can give you a completely different result if the spStoredProcedure procedure has any logic dependent on the input value of the @OutParam parameter, where that value of "13" was somehow meaningful as an input value.
It's easy to see how it can be convenient (it enables you to see the output values of the proc call, which you would otherwise need to do with the "RPC Output Parameter" event), but it is effectively a "lie" as to what T-SQL equivalent was executed.
RELATED: I just came across an article from the Microsoft Customer Service and Support team about another case where the conversion of the RPC:Completed event's BinaryData into a displayable TextData value results in an inaccurate reproduction of the original RPC call - this time due to codepage issues:
http://blogs.msdn.com/b/psssql/archive/2008/01/24/how-it-works-conversion-of-a-varchar-rpc-parameter-to-text-from-a-trace-trc-capture.aspx
UPDATED: By experimenting with this, I found another peculiarity of the behaviour - the profiler will only use this incorrect initial SET if the input value for that parameter, in the RPC call, was Null. If a non-null value was provided (and the parameter, in .Net SqlClient, had direction "InputOutput"), then that initial SET holds the true input value, and not the resulting output value. But if the input was null, then the output value is set instead.
This observation supports the notion that this is simply a null-handling bug in the profiler RPC-to-TSQL display conversion.
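To illustrate the observation (the literal values here are just examples, not captured output), the TextData that profiler reconstructs in the two cases would look roughly like this:
-- parameter passed as NULL (direction InputOutput): the initial SET shows the OUTPUT value
declare @p1 int
set @p1=13
exec spStoredProcedure @OutParam=@p1 output
select @p1

-- parameter passed with a non-null input value, e.g. 5: the initial SET shows the true input
declare @p1 int
set @p1=5
exec spStoredProcedure @OutParam=@p1 output
select @p1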
