We are working on migrating Netezza to Snowflake. Netezza stored procedures can be called with any number of arguments, thanks to PROC_ARGUMENT_TYPES. Do we have a similar function in Snowflake as well?
For example,
c := PROC_ARGUMENT_TYPES.count;
returns the number of arguments passed.
Please note: we are working on SQL stored procedures in Snowflake.
Snowflake does not allow procedures or UDFs with an arbitrary number of input parameters. However, it's possible to approximate this capability using any combination of procedure overloading, arrays, objects, and variants.
Here's one example that's using procedure overloading and variants. The first procedure has only the required parameters. The second procedure has the required parameters plus an additional parameter that accepts a variant.
If the calling SQL specifies two parameters, Snowflake resolves the call to the overload with only two parameters in its signature. That overload in turn just calls the main stored procedure, specifying NULL for the third parameter, and returns the result.
The main stored procedure with three inputs has a variant for the final input. It can accept an array or an object. An array requires positional awareness of the inputs; an object does not, since it allows passing name/value pairs.
create or replace procedure VARIABLE_SIGNATURE(REQUIRED_PARAM1 string, REQUIRED_PARAM2 string)
returns variant
language javascript
as
$$
// Delegate to the three-parameter overload, passing NULL for the optional variant.
var rs = snowflake.execute({sqlText:`call VARIABLE_SIGNATURE(?,?,null)`,binds:[REQUIRED_PARAM1, REQUIRED_PARAM2]});
rs.next();
return rs.getColumnValue(1);
$$;
create or replace procedure VARIABLE_SIGNATURE(REQUIRED_PARAM1 string, REQUIRED_PARAM2 string, OPTIONAL_PARAMS variant)
returns variant
language javascript
as
$$
var out = {};
out.REQUIRED_PARAM1 = REQUIRED_PARAM1;
out.REQUIRED_PARAM2 = REQUIRED_PARAM2;
out.OPTIONAL_PARAMS = OPTIONAL_PARAMS;
return out;
$$;
-- Call the SP overload different ways:
call VARIABLE_SIGNATURE('PARAM1', 'PARAM2');
call VARIABLE_SIGNATURE('PARAM1', 'PARAM2', array_construct('PARAM3', 'PARAM4', 'PARAM5'));
call VARIABLE_SIGNATURE('PARAM1', 'PARAM2', object_construct('PARAM3_NAME', 'PARAM3_VALUE', 'PARAM10_NAME', 'PARAM10_VALUE'));
While these SPs are JavaScript, overloading and the use of arrays, objects, and variants work the same way for Snowflake Scripting (SQL) stored procedures.
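For instance, here is a minimal Snowflake Scripting sketch of the same pattern (VARIABLE_SIGNATURE_SQL is a hypothetical name, and the body just echoes its inputs like the JavaScript version above):
create or replace procedure VARIABLE_SIGNATURE_SQL(REQUIRED_PARAM1 string, REQUIRED_PARAM2 string, OPTIONAL_PARAMS variant)
returns variant
language sql
as
$$
begin
    -- Echo the inputs back as a single object.
    return object_construct(
        'REQUIRED_PARAM1', REQUIRED_PARAM1,
        'REQUIRED_PARAM2', REQUIRED_PARAM2,
        'OPTIONAL_PARAMS', OPTIONAL_PARAMS);
end;
$$;
create or replace procedure VARIABLE_SIGNATURE_SQL(REQUIRED_PARAM1 string, REQUIRED_PARAM2 string)
returns variant
language sql
as
$$
declare
    result variant;
begin
    -- Delegate to the three-parameter overload, passing NULL for the variant.
    call VARIABLE_SIGNATURE_SQL(:REQUIRED_PARAM1, :REQUIRED_PARAM2, null) into :result;
    return result;
end;
$$;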
Here are some things I have noticed about valid notations for this in Snowflake.
To avoid maintaining duplicate, overloaded versions of the stored procedure, an alternative kludge might be to require passing some sort of testable falsy variant, or NULL, when no additional values are wanted.
-- Call the SP by passing a testable, falsy value:
call VARIABLE_SIGNATURE('PARAM1', 'PARAM2'); -- This will fail without an overload matching a two string/varchar signature.
call VARIABLE_SIGNATURE('PARAM1', 'PARAM2', NULL); -- This will work.
call VARIABLE_SIGNATURE('PARAM1', 'PARAM2', ''::variant); -- This will work.
call VARIABLE_SIGNATURE('PARAM1', 'PARAM2', array_construct()); -- This will work.
call VARIABLE_SIGNATURE('PARAM1', 'PARAM2', object_construct()); -- This will work.
Of course, array_construct('PARAM3', 'PARAM4', 'PARAM5') can also be written as parse_json('["PARAM3", "PARAM4", "PARAM5"]').
Similarly, object_construct('PARAM3_NAME', 'PARAM3_VALUE', 'PARAM10_NAME', 'PARAM10_VALUE') can also be written as parse_json('{"PARAM3_NAME": "PARAM3_VALUE", "PARAM10_NAME": "PARAM10_VALUE"}').
Neither of these alternatives gains us anything especially useful, unless you just like parse_json() more than the other two functions.
Also, I am not sure if this has always worked (maybe Greg Pavlik knows?), but the notation for these variant types can be abbreviated a bit by constructing an object with {} or an array with [], which makes the calls slightly cleaner and more readable.
To explore the notations that Snowflake will accept, here are examples of code that will work:
-- Call the SP using different notations:
call VARIABLE_SIGNATURE('PARAM1', 'PARAM2', (select array_construct('PARAM3', 'PARAM4', 'PARAM5'))); -- Works, but the subquery makes the notation awkward & hard to read.
call VARIABLE_SIGNATURE('PARAM1', 'PARAM2', (select ['PARAM3', 'PARAM4', 'PARAM5'])); -- Works, but the subquery makes the notation awkward & hard to read.
call VARIABLE_SIGNATURE('PARAM1', 'PARAM2', ['PARAM3', 'PARAM4', 'PARAM5']); -- This also works & is easy to read.
call VARIABLE_SIGNATURE('PARAM1', 'PARAM2', {'PARAM3_NAME': 'PARAM3_VALUE', 'PARAM10_NAME': 'PARAM10_VALUE'}); -- This also works & is easy to read.
Is it possible in SQL Server to define a user defined function with fixed enumerable parameters?
Like many predefined functions in SQL Server, such as DATEDIFF, which takes DAY, MONTH, etc. as its first parameter, yet these are not char or any other data type...
I think it should be easy to find the answer on the Internet, but I don't know exactly what I should search for. 😅😅
SQL Server doesn't have constants or enums in that sense; parameters to functions or procedures require you to pass in strings, numbers, or variables.
Yes, this is unlike the ability to use well-defined constants directly in built-in functions and other T-SQL constructs. Maybe that's something we'll see in a future evolution of the language, but this is what we have today.
For things like DATEADD you are passing an identifier... note that these work:
SELECT DATEADD([DAY], 1, GETDATE());
SELECT DATEADD("DAY", 1, GETDATE());
But this will fail:
SELECT DATEADD('DAY', 1, GETDATE());
Interestingly this will also fail (just further evidence that this is being handled like an identifier):
SET QUOTED_IDENTIFIER OFF;
SELECT DATEADD("DAY", 1, GETDATE());
You can't write your own functions or procedures that take identifiers as input; they are always either interpreted as an implicitly converted string (as in EXEC sp_who active;) or they simply fail a parse check (as in the examples above). Input parameters to built-in and user-defined functions will take expressions, but procedures will not.
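If you need enum-like behavior in your own code, the usual workaround is to accept a plain string and validate it against the fixed set of allowed values yourself. A minimal sketch (dbo.AddOnePer is a hypothetical name):
CREATE FUNCTION dbo.AddOnePer (@part VARCHAR(10), @d DATETIME)
RETURNS DATETIME
AS
BEGIN
    RETURN CASE UPPER(@part)
        WHEN 'DAY'   THEN DATEADD(DAY,   1, @d)
        WHEN 'MONTH' THEN DATEADD(MONTH, 1, @d)
        WHEN 'YEAR'  THEN DATEADD(YEAR,  1, @d)
        ELSE NULL -- unknown "enum" value: fail softly with NULL
    END;
END
GO
SELECT dbo.AddOnePer('MONTH', GETDATE());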
I succeeded in creating the following function on PG 8.4.x:
CREATE OR REPLACE FUNCTION foo()
RETURNS VOID
AS $function$
BEGIN
select concat('a','b');
END;$function$
LANGUAGE plpgsql;
The function is created without any errors.
But when I try to use the function, I get:
select foo();
ERROR: function concat(unknown, unknown) does not exist
LINE 1: select concat('a','b')
               ^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
QUERY: select concat('a','b')
CONTEXT: PL/pgSQL function "foo" line 2 at SQL statement
How come PG succeeds in creating a function that actually calls an unknown function? (CONCAT is supported only in PG 9.x.)
PL/pgSQL checks only the syntax of embedded SQL at validation time. The semantics (identifiers, functions, ...) are checked immediately before the first evaluation at run time. Take a look at the plpgsql_check extension; it does a complete check of embedded SQL.
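For example, a quick sketch of using that extension (plpgsql_check is a third-party extension for PG 9.2+, so it won't help on 8.4 itself):
CREATE EXTENSION IF NOT EXISTS plpgsql_check;
-- Returns one row per problem found, including the unresolved concat() call:
SELECT * FROM plpgsql_check_function('foo()');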
Because functions get compiled the first time you call them. Otherwise it would not be possible to define a set of recursive functions where one calls the other :).
EDIT (thanks to Nick Barnes): Somewhat unrelated to the question, there is a switch:
SET check_function_bodies = true;
but this only enables basic syntax checks for PL/pgSQL functions. The binding will nonetheless be done on the first call. Postgres will only attempt to resolve function/table names for LANGUAGE sql.
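For contrast, a minimal sketch (foo_sql is a hypothetical name): the same body declared as LANGUAGE sql is resolved at creation time, so on 8.4 it fails immediately rather than on first call:
CREATE OR REPLACE FUNCTION foo_sql()
RETURNS text
AS $$ SELECT concat('a', 'b'); $$
LANGUAGE sql;
-- ERROR: function concat(unknown, unknown) does not exist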
I'm creating a SQL Server unit test using tSQLt.
The proc that I'm testing returns 3 result sets. My web API handles the multiple result sets and sends them to the UI fine.
Question: In my SQL Server unit test, how do I handle the 3 result sets? If the proc returns one result set, it is easy to handle. I use the following:
Insert Into #ReturnData
(
ID,
Data1,
Data2
)
Exec @Ret = StoreProcName
Then I can run a bunch of checks against the #ReturnData temp table. But I don't understand how to handle/test a proc if it returns multiple result sets. Is this even possible?
Thanks.
The method I'd suggest you use is tSQLt.ResultSetFilter(). This takes a parameter for number of the result set to return and calls your code under test (StoreProcName in your example), returning that result set, which you can then use Insert..Exec to capture.
The downside of this procedure is that it captures only one result set per run, so you need to call it multiple times to return all of the result sets. I usually look at only one result set per test, allowing me to concentrate on answering one question in that test; but if your result sets interrelate and you need both returned for your test to be evaluated, then you will need to call tSQLt.ResultSetFilter, and hence the code under test, more than once in your test (the manual has more info on this situation).
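Here's a minimal sketch of the capture pattern (the temp table and column names are hypothetical):
CREATE TABLE #ResultSet2 (ID INT, Data1 VARCHAR(50), Data2 VARCHAR(50));
INSERT INTO #ResultSet2 (ID, Data1, Data2)
EXEC tSQLt.ResultSetFilter 2, 'EXEC StoreProcName';
-- Now run your checks against #ResultSet2, e.g. with tSQLt.AssertEqualsTable.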
As an aside, I have previously blogged about some unexpected behaviour I encountered when using insert..exec with SPs that return multiple identical result sets which may be of interest.
DaveGreen has the answer. But for completeness, I wanted to share this which expands on the basics: http://tsqlt.org/201/using-tsqlt-resultsetfilter/
If you call a stored procedure and need to pass in parameters, do the following:
Create a @Variable that holds the 'exec …' string with the parameter values embedded. Then you can do something like this:
Declare @Variable Varchar(max);
Set @Variable = 'exec STOREDPROCNAME ''param1'', ''param2''';
EXEC tSQLt.ResultSetFilter 2, @Variable;
The number 2 specifies the second result set that is returned.
Nice and snappy ... ;-)
In ADO.NET, the code is calling a stored procedure with input and output parameters.
I understand that if some of the input parameters are optional (have default values in the SP), the code doesn't need to define and send values for those parameters unless it needs to.
My question is:
Does the same apply to optional output parameters? Can the code ignore an SP output parameter that has a default value?
I could have tested it myself but I don't have a working example right now, and I am short of time.
Thank you.
Yes. If a parameter has a default value then it may be safely omitted, regardless of the parameter direction (INPUT or OUTPUT). The fact that the procedure is called from ADO.NET is entirely irrelevant. E.g.:
create procedure usp_test
@a int = 1 output,
@b int = 2
as
begin
set @a = @b;
end
go
exec usp_test
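For comparison, a quick sketch of the call when you do want the OUTPUT value back:
declare @result int;
exec usp_test @a = @result output;
select @result; -- 2, since the procedure copies @b's default into @a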
Whether it is safe to do from a business-rules point of view (i.e. ignoring an OUTPUT parameter's returned value) is entirely up to the specifics of the procedure and your app.
EDIT: Turns out I was wrong here, but I'm going to leave my answer because the information on SqlParameter might be useful. Sorry for the inaccuracy though.
I don't believe so. You must send in an OUTPUT parameter, and in ADO.NET this is accomplished by adding a SqlParameter with its ParameterDirection property set to ParameterDirection.Output.
http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlparameter.direction.aspx
http://msdn.microsoft.com/en-us/library/system.data.parameterdirection.aspx
Is it possible to have a table-valued function in T-SQL return a table with a variable number of columns?
The column names may simply be 1, 2, …, n.
Right now I have a "string split" function that returns a single-column n x 1 table, and I pivot it afterwards into a 1 x n table, but I'd rather streamline the process by returning the correct table format to begin with.
I intend to use a CLR procedure in C# for this function, I just don't know how to set up the user-defined function to return my data in the format I want: with a variable number of columns, dependent on the input string.
It is not possible to return a non-static Result Set from a Table-Valued Function (TVF), whether it be written in T-SQL or .NET / SQLCLR. Only Stored Procedures can dynamically create a Result Set.
Basically, any function needs to return a consistent result type, whether it is a scalar value or a collection (i.e. Result Set).
However, in a SQLCLR stored procedure, you can create a dynamic Result Set via SqlMetaData. As long as you don't have an explicit need to SELECT ... FROM it, then maybe a stored procedure would work.
Of course, you might also be able to get away with doing this in T-SQL, using dynamic SQL to construct a SELECT statement based on the output of your split function.
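A minimal sketch of that dynamic SQL approach (dbo.SplitToColumns is a hypothetical name; it assumes SQL Server 2017+ for STRING_SPLIT and STRING_AGG, and note that STRING_SPLIT does not guarantee output order):
CREATE OR ALTER PROCEDURE dbo.SplitToColumns
    @input NVARCHAR(MAX)
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX);
    -- Number each split value, then build "SELECT 'a' AS [1], 'b' AS [2], ..."
    WITH parts AS (
        SELECT value,
               ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
        FROM STRING_SPLIT(@input, N',')
    )
    SELECT @sql = N'SELECT '
        + STRING_AGG(N'''' + REPLACE(value, N'''', N'''''') + N''' AS '
                     + QUOTENAME(CAST(n AS NVARCHAR(10))), N', ')
          WITHIN GROUP (ORDER BY n)
    FROM parts;
    EXEC sys.sp_executesql @sql;
END
GO
-- Returns a single row with columns [1], [2], [3]:
EXEC dbo.SplitToColumns N'a,b,c';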
A lot of this comes down to the exact context in which this functionality needs to be used.