Safely converting a function into a procedure in SQL Server

I've been converting an Oracle schema to a SQL Server one and got the following error:
Invalid use of a side-effecting operator 'SET COMMAND' within a function.
In my case, modifying the database involved this:
set @originalDateFirst = @@DateFirst;
set datefirst 1;
set @DayOfWeek = datepart(weekday,@DATE); -- 1 to 5 = Weekday
set datefirst @originalDateFirst;
Ideally this wouldn't have modified anything, but the datepart function relies on session-level state (the DATEFIRST setting).
I'm not really from a database background, so I was slightly baffled by this, but reading other answers it looked like all I needed to do was swap the word function for procedure and I'd be away. However, I then got the following error:
Incorrect syntax near 'RETURNS'.
Reading around a bit, it seems stored procedures aren't allowed to return anything they like - only integers. However, those integers normally have the same semantics as a console application's return code: 0 is success and anything else is an error.
Luckily the type I wanted to return was an integer so fixing the next error:
Incorrect syntax near 'RETURNS'.
Involved just removing
RETURNS INTEGER
from the function/procedure. However, I'm unsure whether this error-code interpretation has any weird side effects outside of my control. The function just returns either 0 or 1 as a true/false flag (where 1 is true and 0 is false, as you might expect), so one of my return values would count as an 'error'.
What, if any, are the consequences of piggybacking on the return code of a procedure rather than using an output parameter? Is it just bad practice? If it's safe to do this I'd certainly prefer to, so I don't need to change any calling code.

This isn't an answer to your question as posed, but may be a better solution to the overall problem.
Rather than having to rely on a particular DATEFIRST setting, or changing the DATEFIRST setting, why not use an expression that always returns reliable results no matter what the DATEFIRST setting is?
For example, this expression:
select (DATEPART(weekday,GETDATE()) + 7 - DATEPART(weekday,'20140406')) % 7
always returns 1 on Mondays, 2 on Tuesdays, ..., 5 on Fridays. No matter what settings are in effect.
So, your entire original block of 4 lines of code could just be:
set @DayOfWeek = (DATEPART(weekday,@Date) + 7 -
DATEPART(weekday,'20140406')) % 7; -- 1 to 5 = Weekday
And now you should be able to continue writing it as a function rather than a stored procedure.
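For illustration, here's a minimal sketch of the whole thing as a scalar function (the name dbo.fnIsWeekday and its signature are my invention, not taken from the original code):
CREATE FUNCTION dbo.fnIsWeekday (@Date datetime)
RETURNS int
AS
BEGIN
-- Monday = 1 ... Friday = 5, Saturday = 6, Sunday = 0, regardless of DATEFIRST
DECLARE @DayOfWeek int
SET @DayOfWeek = (DATEPART(weekday, @Date) + 7 - DATEPART(weekday, '20140406')) % 7
-- 1 = weekday, 0 = weekend
RETURN CASE WHEN @DayOfWeek BETWEEN 1 AND 5 THEN 1 ELSE 0 END
END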
If it's safe to do this I'd certainly prefer to, so I don't need to change any calling code.
Which you would have to do if you changed your function into a stored procedure. There's no syntax where you can look at the call and be in doubt about whether a stored procedure or a function is being invoked - they always use different syntaxes. A procedure is executed by being the first piece of text in a batch, or by being preceded by the EXEC keyword, with no parentheses.
A function, on the other hand, always has to have parentheses applied when calling it, and must appear as an expression within a larger statement (such as SELECT). You cannot EXEC a function, nor call one by it being the first piece of text in a batch.
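To make the distinction concrete, a quick sketch (usp_GetFlag and fn_GetFlag are invented names, just to show the two calling syntaxes):
-- A procedure is executed with EXEC; its return code can be captured into a variable
DECLARE @rc int
EXEC @rc = dbo.usp_GetFlag
-- A function is called as an expression, always with parentheses
SELECT dbo.fn_GetFlag() AS Flag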

An out param could be of (almost) any valid datatype, RETURN is always an int, not necessarily 0 or 1.
Because you can't use a procedure as a query source (it's not a table), to consume a return value from a procedure, declare a variable and exec the procedure like this:
create procedure p as
-- some code
return 13
go
declare @r int
exec @r = p
select @r
I wouldn't call it piggybacking; it's a regular way to return a success/error code, for example. But how you interpret the return value is entirely up to the calling code.
Functions, on the other hand, can be used as a query source, if table-valued, or as a scalar value in a select list or where clause etc. But you can't modify data inside functions, and there are other restrictions (as you've learned already). Furthermore, functions can have a nasty impact on performance (except inline table-valued functions, which are pretty much safe to use).
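For comparison, a minimal sketch of the OUTPUT-parameter alternative (the procedure p2 is made up for illustration):
create procedure p2 @success bit output as
-- some code
set @success = 1
go
declare @flag bit
exec p2 @success = @flag output
select @flag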

Related

Logic based on output of a scalar function with BIT datatype

I came across a stored procedure which does this:
DECLARE @DebugLogging BIT = dbo.fnIsDebugLoggingEnabled();
IF (@DebugLogging = 1)
BEGIN
-- some verbose logging information here...
END
based on whether debug logging is enabled or not, which is the output of the scalar function dbo.fnIsDebugLoggingEnabled().
I checked the create script for that function and I think it will always return 0. Maybe I am wrong. Can someone check if I am thinking correctly or not?
I am thinking: why would someone put effort into doing something for a condition that will never occur?
Script for the function is:
CREATE FUNCTION [dbo].[fnIsDebugLoggingEnabled]()
RETURNS BIT
AS
BEGIN
RETURN 0;
END
I ran select dbo.fnIsDebugLoggingEnabled() and it indeed returned 0.
Your observation is correct; the returned value will always be zero so the verbose logging will not occur.
I can't say why the developer chose to implement the debugging flag this way but it may be a deployment option where different versions of the function are created depending on the selection. Personally, I'd store configurable values like this in a table.
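To illustrate that last suggestion, a sketch of a table-driven version (the dbo.Config table and its columns are assumptions, not part of the original code):
CREATE TABLE dbo.Config (SettingName sysname PRIMARY KEY, SettingValue bit NOT NULL);
INSERT INTO dbo.Config (SettingName, SettingValue) VALUES ('DebugLogging', 0);
GO
ALTER FUNCTION [dbo].[fnIsDebugLoggingEnabled]()
RETURNS BIT
AS
BEGIN
-- Read the flag from configuration instead of hard-coding it
RETURN (SELECT SettingValue FROM dbo.Config WHERE SettingName = 'DebugLogging');
END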

tSQLt - Handling multiple SQL Server result sets

I'm creating a SQL Server unit test using tSQLt.
The proc that I'm testing returns 3 result sets. My webAPI handles the multiple result sets and sends them to the UI fine.
Question: In my SQL Server unit test, how do I handle the 3 result sets? If the proc returns one result set, it is easy to handle. I use the following:
Insert Into #ReturnData
(
ID,
Data1,
Data2
)
Exec @Ret = StoreProcName
Then I can run a bunch of checks against the #ReturnData temp table. But I don't understand how to handle/test a proc if it returns multiple result sets. Is this even possible?
Thanks.
The method I'd suggest you use is tSQLt.ResultSetFilter(). This takes a parameter for the number of the result set to return and calls your code under test (StoreProcName in your example), returning that result set, which you can then use Insert..Exec to capture.
The downside of this procedure is that it only captures one result set per run - so you need to call it multiple times to return all of the result sets. I usually only look at one result set per test, allowing me to concentrate on answering one question in that test, but if your result sets interrelate and you need more than one of them for your test to be evaluated, then you will need to call tSQLt.ResultSetFilter (and hence the code under test) more than once in your test (the manual has more info on this situation).
As an aside, I have previously blogged about some unexpected behaviour I encountered when using insert..exec with SPs that return multiple identical result sets which may be of interest.
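For example, a minimal sketch of that pattern against the asker's procedure (the temp table and its columns are guesses based on the question):
Create Table #SecondResultSet (ID int, Data1 varchar(50), Data2 varchar(50))
-- Capture only the second result set produced by the code under test
Insert Into #SecondResultSet (ID, Data1, Data2)
Exec tSQLt.ResultSetFilter 2, 'Exec StoreProcName'
-- ...then run asserts against #SecondResultSet, e.g. with tSQLt.AssertEqualsTable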
DaveGreen has the answer. But for completeness, I wanted to share this which expands on the basics: http://tsqlt.org/201/using-tsqlt-resultsetfilter/
If you call a stored procedure and need to pass in parameters, do the following:
Create a @Variable that holds the 'exec ...' string with the parameter values embedded. Then you can do something like this:
Declare @Variable Varchar(max)
Set @Variable = 'exec STOREDPROCNAME ''param1'', ''param2''';
EXEC tSQLt.ResultSetFilter 2, @Variable
The number 2 specifies the second result set that is returned.
Nice and snappy ... ;-)

ADO.Net and stored procedure output parameters

In ADO.NET, the code is calling a stored procedure with input and output parameters.
I understand that if some of the input parameters are optional (have default values in the SP), the code doesn't need to define and send those parameter values unless it needs to.
My question is:
Does the same apply to optional output parameters? Can the code ignore an SP output parameter that is optional (has a default value)?
I could have tested it myself but I don't have a working example right now, and I am short of time.
Thank you.
Yes. If a parameter has a default value then it may be safely omitted, regardless of the parameter direction (INPUT or OUTPUT). The fact that the procedure is called from ADO.NET is entirely irrelevant. E.g.:
create procedure usp_test
@a int = 1 output,
@b int = 2
as
begin
set @a = @b;
end
go
exec usp_test
Whether it is safe to do from a business-rules point of view (i.e. ignoring an OUTPUT parameter's returned value) is entirely up to the specifics of the procedure and your app.
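Conversely, if the caller does want the value, it just binds a variable to the OUTPUT parameter; a small sketch against the procedure above:
declare @result int
exec usp_test @a = @result output
select @result -- 2, because the procedure copies @b's default into @a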
EDIT: Turns out I was wrong here, but I'm going to leave my answer because the information on SqlParameter might be useful. Sorry for the inaccuracy though.
I don't believe so. You must send in an OUTPUT parameter, and in ADO.NET this is accomplished by adding a SqlParameter with its ParameterDirection property set to ParameterDirection.Output.
http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlparameter.direction.aspx
http://msdn.microsoft.com/en-us/library/system.data.parameterdirection.aspx

"Catching" Errors from within a user defined function in SQL Server 2005

I have a function that takes a number as an input and converts it to a date. This number isn't any standard form of date number, so I have to manually subdivide portions of the number into various date parts, cast the date parts to varchar strings, and then concatenate and cast the strings to a new datetime object.
My question is: how can I catch a casting failure and return a null or low-range value from my function? I would prefer for my function to "passively" fail, returning a default value, instead of returning a fail code to my stored procedure. TRY/CATCH statements apparently don't work from within functions (unless there is some type of definition flag that I am unaware of), and trying the standard '@@Error <> 0' method doesn't work either.
Incidentally this sounds like it could be a scalar UDF. This is a performance disaster, as Alex's blog points out. http://sqlblog.com/blogs/alexander_kuznetsov/archive/2008/05/23/reuse-your-code-with-cross-apply.aspx
SELECT CASE WHEN ISDATE(@yourParameter) = 1
THEN CAST(@yourParameter AS DATETIME)
ELSE YourDefaultValue
END
Since the format is nonstandard it sounds to me like you are stuck with doing all the validation yourself, prior to casting. Making sure that the individual pieces are numeric, checking that the month is between 1 and 12, making sure it's not Feb 30, etc. If anything fails you return nothing.
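As a sketch of that validate-then-cast approach (the yyyymmdd layout and the function name are invented for illustration; your real format will differ, and SQL Server 2005 has no TRY_CONVERT, so the checks have to be manual):
CREATE FUNCTION dbo.fnNumberToDate (@Value varchar(8))
RETURNS datetime
AS
BEGIN
-- Validate the pieces before casting; fail passively by returning NULL
IF @Value NOT LIKE '[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
RETURN NULL
DECLARE @Candidate varchar(10)
SET @Candidate = SUBSTRING(@Value,1,4) + '-' + SUBSTRING(@Value,5,2) + '-' + SUBSTRING(@Value,7,2)
-- ISDATE rejects impossible dates such as Feb 30
IF ISDATE(@Candidate) = 0
RETURN NULL -- or return a low-range default such as '19000101'
RETURN CAST(@Candidate AS datetime)
END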

How to declare local variables in postgresql?

There is an almost identical, but not really answered question here.
I am migrating an application from MS SQL Server to PostgreSQL. In many places in code I use local variables so I would like to go for the change that requires less work, so could you please tell me which is the best way to translate the following code?
-- MS SQL Syntax: declare 2 variables, assign value and return the sum of the two
declare @One integer = 1
declare @Two integer = 2
select @One + @Two as SUM
this returns:
SUM
-----------
3
(1 row(s) affected)
I will use PostgreSQL 8.4, or even 9.0 if it contains significant features that will simplify the translation.
PostgreSQL historically doesn't support procedural code at the command level - only within functions. However, in PostgreSQL 9, support has been added to execute an inline code block that effectively supports something like this, although the syntax is perhaps a bit odd, and there are many restrictions compared to what you can do with SQL Server. Notably, the inline code block can't return a result set, so it can't be used for what you outline above.
In general, if you want to write some procedural code and have it return a result, you need to put it inside a function. For example:
CREATE OR REPLACE FUNCTION somefuncname() RETURNS int LANGUAGE plpgsql AS $$
DECLARE
one int;
two int;
BEGIN
one := 1;
two := 2;
RETURN one + two;
END
$$;
SELECT somefuncname();
The PostgreSQL wire protocol doesn't, as far as I know, allow for things like a command returning multiple result sets. So you can't simply map T-SQL batches or stored procedures to PostgreSQL functions.

Resources