Postgres integer arrays as parameters?

I understand that in pure Postgres you can pass an integer array into a function, but that this isn't supported in the .NET data provider Npgsql.
I currently have a DbCommand into which I load a call to a stored proc, add in a parameter and execute scalar to get back an Id to populate an object with.
This now needs to take n integers as arguments. These are used to create child records linking the newly created record by its id to the integer arguments.
Ideally I'd rather not have to make multiple ExecuteNonQuery calls on my DbCommand for each of the integers, so I'm about to build a csv string as a parameter that will be split on the database side.
I normally live in LINQ 2 SQL, savouring the DB abstraction; working on this project with manual data access, it's all getting a bit dirty. How do people usually go about passing these kinds of parameters into Postgres?

See: http://www.postgresql.org/docs/9.1/static/arrays.html
If your non-native driver still does not allow you to pass arrays, then you can:
pass a string representation of an array (which your stored procedure can then parse into an array -- see string_to_array)
CREATE FUNCTION my_method(TEXT) RETURNS VOID AS $$
DECLARE
    ids INT[];
BEGIN
    ids := string_to_array($1, ',')::INT[];
    ...
END $$ LANGUAGE plpgsql;
then
SELECT my_method(:1)
with :1 = '1,2,3,4'
rely on Postgres itself to cast from a string to an array
CREATE FUNCTION my_method(INT[]) RETURNS VOID AS $$
...
END $$ LANGUAGE plpgsql;
then
SELECT my_method('{1,2,3,4}')
choose not to use bind variables and instead issue an explicit command string with all parameters spelled out (make sure to validate or escape all parameters coming from outside to avoid SQL injection attacks).
CREATE FUNCTION my_method(INT[]) RETURNS VOID AS $$
...
END $$ LANGUAGE plpgsql;
then
SELECT my_method(ARRAY [1,2,3,4])
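In the original Npgsql/.NET context, the first approach above (passing a CSV string) might be wired up roughly like this; the connection string and id values are placeholders, not part of the answer:
// Hedged sketch: pass the ids as a single CSV string parameter and let the
// function split it with string_to_array.
// requires: using Npgsql;
var ids = new[] { 1, 2, 3, 4 };
using (var conn = new NpgsqlConnection("Host=localhost;Database=mydb;Username=me"))
using (var cmd = new NpgsqlCommand("SELECT my_method(@id_csv)", conn))
{
    cmd.Parameters.AddWithValue("id_csv", string.Join(",", ids));
    conn.Open();
    cmd.ExecuteNonQuery();
}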

I realize this is an old question, but it took me several hours to find a good solution, and I thought I'd pass on what I learned here and save someone else the trouble. Try, for example,
SELECT * FROM some_table WHERE id_column = ANY(@id_list)
where @id_list is bound to an int[] parameter by way of
command.Parameters.Add("@id_list", NpgsqlDbType.Array | NpgsqlDbType.Integer).Value = my_id_list;
where command is an NpgsqlCommand (using C# and Npgsql in Visual Studio).
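For completeness, here is a minimal end-to-end sketch of that approach; the connection string, table name, and id values are placeholders rather than anything from the original answer:
// Hedged sketch: bind an int[] directly as an array parameter with Npgsql.
// requires: using Npgsql; using NpgsqlTypes;
var myIdList = new[] { 1, 2, 5, 10 };
using (var conn = new NpgsqlConnection("Host=localhost;Database=mydb;Username=me"))
using (var command = new NpgsqlCommand(
    "SELECT * FROM some_table WHERE id_column = ANY(@id_list)", conn))
{
    command.Parameters.Add("@id_list", NpgsqlDbType.Array | NpgsqlDbType.Integer).Value = myIdList;
    conn.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // process each matching row here
        }
    }
}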

You can always use a properly formatted string. The trick is the formatting.
command.Parameters.Add("#array_parameter", string.Format("{{{0}}}", string.Join(",", array));
Note that if your array is an array of strings, then you'll need to use array.Select(value => string.Format("\"{0}\", value)) or the equivalent. I use this style for an array of an enumerated type in PostgreSQL, because there's no automatic conversion from the array.
In my case, my enumerated type has some values like 'value1', 'value2', 'value3', and my C# enumeration has matching values. In my case, the final SQL query ends up looking something like (E'{"value1","value2"}'), and this works.
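As a rough illustration of that trick for a string (or enum) array, building on the command object above (the parameter name and values are made up):
// Hedged sketch: build a Postgres array literal by hand for a string/enum array.
// requires: using System.Linq;
var values = new[] { "value1", "value2" };
var arrayLiteral = string.Format(
    "{{{0}}}",
    string.Join(",", values.Select(v => string.Format("\"{0}\"", v))));
// arrayLiteral is now {"value1","value2"}
command.Parameters.AddWithValue("array_parameter", arrayLiteral);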

Full Coding Structure
postgresql function
CREATE OR REPLACE FUNCTION admin.usp_itemdisplayid_byitemhead_select(
    item_head_list int[])
RETURNS TABLE(item_display_id integer)
LANGUAGE 'sql'
COST 100
VOLATILE
ROWS 1000
AS $BODY$
    SELECT vii.item_display_id from admin.view_item_information as vii
    where vii.item_head_id = ANY(item_head_list);
$BODY$;
Model
public class CampaignCreator
{
    public int item_display_id { get; set; }
    public List<int> pitem_head_id { get; set; }
}
.NET CORE function
DynamicParameters _parameter = new DynamicParameters();
_parameter.Add("@item_head_list", obj.pitem_head_id);
string sql = "select * from admin.usp_itemdisplayid_byitemhead_select(@item_head_list)";
response.data = await _connection.QueryAsync<CampaignCreator>(sql, _parameter);

Related

System.NotSupportedException: Commands with multiple queries cannot have out parameters

I ran into another issue when using a data reader around a sproc that returns multiple ref cursors: I am getting a NotSupportedException. I can see where it is coming from in the Npgsql source code; however, I am not sure I agree with throwing that exception. The code we have written works with Oracle (both fully managed and managed flavors) and SQL Server. Any help is appreciated to keep it consistent for an API across some of those key flavors of DBMS out there.
sproc body
CREATE OR REPLACE FUNCTION public.getmultipleresultsets (
    v_organizationid integer)
RETURNS SETOF refcursor
LANGUAGE 'plpgsql'
AS $BODY$
declare
    cv_1 refcursor;
    cv_2 refcursor;
BEGIN
    open cv_1 for
        SELECT a.errorCategoryId, a.name, a.bitFlag
        FROM ErrorCategories a
        ORDER BY name;
    RETURN next cv_1;

    open cv_2 for
        SELECT *
        FROM StgNetworkStats;
    RETURN next cv_2;
END;
$BODY$;
Key Reader code that wraps postgres sql (Entlib implementation of npgsql)
private IDataReader DoExecuteReader(DbCommand command, CommandBehavior cmdBehavior)
{
    try
    {
        var sql = new StringBuilder();
        using (var reader = command.ExecuteReader(CommandBehavior.SequentialAccess))
        {
            while (reader.Read())
            {
                sql.AppendLine($"FETCH ALL IN \"{reader.GetString(0)}\";");
            }
        }

        command.CommandText = sql.ToString();
        command.CommandType = CommandType.Text;
        IDataReader reader2 = command.ExecuteReader(cmdBehavior);
        return reader2;
    }
    catch (Exception)
    {
        throw;
    }
}
The command building code is shown below
Helper.InitializeCommand(cmd, 300, "getmultipleresultsets");
db.AddReturnValueParameter(cmd);
db.AddInParameter(cmd, "organizationId", DbType.Int32, ORGANIZATIONID);
db.AddCursorOutParameter(cmd, "CV_1");
db.AddCursorOutParameter(cmd, "CV_2
The code that adds the refcursor parameter goes something like this
Your code above mixes the PostgreSQL function with the .NET client code attempting to read its result.
Regardless, your function is declared to return a set of refcursors - this is not the same as two output parameters; you seem to be confusing the name of the cursor (cursors have names; ints, for example, do not) with the name of the parameter (int parameters do have names).
Please note that PostgreSQL does not actually have output parameters - a function always returns a single table, and that's it. PostgreSQL does have a function syntax with output parameters, but that is only a way to construct the schema of the output table. This is unlike SQL Server, which apparently can return both a table and a set of named output parameters. To facilitate portability, when reading results, if Npgsql sees any NpgsqlParameter with direction out, it will attempt to find a resultset with the name of the parameter and will simply populate the NpgsqlParameter's Value with the first row's value for that column. This practice has zero added value over simply reading the resultset yourself - it's just there for compatibility.
To sum it up, I'd suggest you read the refcursors with your reader and then fetch their results as appropriate.
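As a rough illustration of that suggestion, the reader-side code might look something like this (connString and organizationId are placeholders; the function name and cursor handling follow the question above):
// Hedged sketch: call the function inside a transaction, then FETCH ALL from
// each refcursor it returned. Refcursors only live for the duration of the
// enclosing transaction.
// requires: using Npgsql; using System.Collections.Generic;
using (var conn = new NpgsqlConnection(connString))
{
    conn.Open();
    using (var tx = conn.BeginTransaction())
    {
        var cursorNames = new List<string>();
        using (var cmd = new NpgsqlCommand("SELECT public.getmultipleresultsets(@orgId)", conn, tx))
        {
            cmd.Parameters.AddWithValue("orgId", organizationId);
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    cursorNames.Add(reader.GetString(0));   // each row is a cursor name
            }
        }

        foreach (var name in cursorNames)
        {
            using (var fetch = new NpgsqlCommand($"FETCH ALL IN \"{name}\"", conn, tx))
            using (var reader = fetch.ExecuteReader())
            {
                while (reader.Read())
                {
                    // consume this result set (ErrorCategories, then StgNetworkStats)
                }
            }
        }

        tx.Commit();
    }
}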

Postgres function with jsonb parameters

I have seen a similar post here but my situation is slightly different from anything I've found so far. I am trying to call a postgres function with parameters that I can leverage in the function logic as they pertain to the jsonb query. Here is an example of the query I'm trying to recreate with parameters.
SELECT *
from edit_data
where ( "json_field"#>'{Attributes}' )::jsonb @>
      '{"issue_description":"**my description**",
        "reporter_email":"**user@generic.com**"}'::jsonb
I can run this query just fine in PGAdmin but all my attempts thus far to run this inside a function with parameters for "my description" and "user@generic.com" values have failed. Here is a simple example of the function I'm trying to create:
CREATE OR REPLACE FUNCTION get_Features(
    p1 character varying,
    p2 character varying)
RETURNS SETOF edit_metadata AS
$BODY$
    SELECT * from edit_metadata
    where ("geo_json"#>'{Attributes}')::jsonb @>
          '{"issue_description":**$p1**, "reporter_email":**$p2**}'::jsonb;
$BODY$
LANGUAGE sql VOLATILE
COST 100
ROWS 1000;
I know that the syntax is incorrect and I've been struggling with this for a day or two. Can anyone help me understand how to best deal with these double quotes around the value and leverage a parameter here?
TIA
You could use the function json_build_object:
select json_build_object(
    'issue_description', '**my description**',
    'reporter_email', '**user@generic.com**');
And you get:
json_build_object
-----------------------------------------------------------------------------------------
{"issue_description" : "**my description**", "reporter_email" : "**user#generic.com**"}
(1 row)
That way there's no way to produce invalid syntax (no hassle with quoting strings), and you can swap in the values as parameters.
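Tying this back to the function in the question, one hedged way to use it from Npgsql is to pass plain text parameters and let Postgres assemble the jsonb document server-side. This assumes jsonb_build_object is available (PostgreSQL 9.5+), that conn is an open NpgsqlConnection, and takes the table and column names from the question:
// Hedged sketch: parameters go in as plain text; Postgres builds the jsonb
// containment document with jsonb_build_object.
// requires: using Npgsql;
const string sql = @"
    SELECT *
    FROM   edit_metadata
    WHERE  (""geo_json"" #> '{Attributes}')::jsonb @>
           jsonb_build_object('issue_description', @p1, 'reporter_email', @p2)";

using (var cmd = new NpgsqlCommand(sql, conn))
{
    cmd.Parameters.AddWithValue("p1", "my description");
    cmd.Parameters.AddWithValue("p2", "user@generic.com");
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // each matching edit_metadata row
        }
    }
}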

How to retrieve multiple rows from stored procedure with Scala?

Say you have a stored procedure or function returning multiple rows, as discussed in How to return multiple rows from the stored procedure? (Oracle PL/SQL)
What would be a good way, using Scala, to "select * from table (all_emps);" (taken from URL above) and read the multiple rows of data that would be the result?
As far as I can see it is not possible to do this using Squeryl. Is there a scalaified tool like Squeryl that I can use, or do I have to drop to JDBC?
Functions that return tables are an Oracle-specific feature, and I doubt an ORM (be it Scala or even Java) would have support for such a proprietary extension.
So I think you're more or less on your own :).
Probably the easiest way is to use a plain JDBC java.sql.Statement and execute "select * from table (all_emps)" with the executeQuery method.
To address the second part of your question about a way to select from table in a more scala-esque way, I am using Slick. Quoting from their example documentation:
case class Coffee(name: String, supID: Int, price: Double)

implicit val getCoffeeResult = GetResult(r => Coffee(r.<<, r.<<, r.<<))

Database.forURL("...") withSession {
  Seq(
    Coffee("Colombian", 101, 7.99),
    Coffee("Colombian_Decaf", 101, 8.99),
    Coffee("French_Roast_Decaf", 49, 9.99)
  ).foreach(c => sqlu"""
    insert into coffees values (${c.name}, ${c.supID}, ${c.price})
  """.execute)

  val sup = 101
  val q = sql"select * from coffees where sup_id = $sup".as[Coffee]
  //                A bind variable to prevent SQL injection ^
  q.foreach(println)
}
Though I am not sure how it's dealing (if at all) with stored procs/functions.

Calling a sql server stored procedure with output parameter when using Simple.Data

Using Simple.Data, I would like to get the result from an output parameter in a stored procedure. Let's say I have the following SP (disregard the uselessness of it):
CREATE PROCEDURE [dbo].[TestProc]
    @InParam int,
    @OutParam int OUTPUT
AS
BEGIN
    SELECT @OutParam = @InParam * 10
END
When I call it with Simple.Data, I use the following code.
var db = Database.Open();
int outParam;
var result = db.TestProc(42, out outParam);
Console.WriteLine(outParam); // <-- == 0
Console.WriteLine(result.OutputValues["OutParam"]); // <-- == 420
It feels like outParam should contain the value, and not the OutputValues dictionary. So my question is: Is there a nicer way to get the result from OutParam in this particular case?
Unfortunately, out parameters are not supported by the dynamic binding infrastructure used by Simple.Data, as they are not supported in all CLR languages.
I am open to suggestions for better syntax, though.
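In the meantime, one small hedged workaround is to hide the OutputValues lookup behind a helper method of your own; this is purely illustrative and not part of Simple.Data:
// Hedged sketch: the out argument is only a placeholder; the real value comes
// back through result.OutputValues, as in the question.
static int CallTestProc(dynamic db, int inParam)
{
    int ignored;
    var result = db.TestProc(inParam, out ignored);  // ignored stays 0
    return (int)result.OutputValues["OutParam"];     // e.g. 420 for inParam = 42
}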

Is it possible to create table-valued *methods* in a SQL CLR user-defined type?

I have a CLR UDT that would benefit greatly from table-valued methods, ala xml.nodes():
-- nodes() example, for reference:
declare @xml xml = '<id>1</id><id>2</id><id>5</id><id>10</id>'
select c.value('.','int') as id from @xml.nodes('/id') t (c)
I want something similar for my UDT:
-- would return tuples (1, 4), (1, 5), (1, 6)....(1, 20)
declare @udt dbo.FancyType = '1.4:20'
select * from @udt.AsTable() t (c)
Does anyone have any experience w/ this? Any help would be greatly appreciated. I've tried a few things and they've all failed. I've looked for documentation and examples and found none.
Yes, I know I could create table-valued UDFs that take my UDT as a parameter, but I was rather hoping to bundle everything inside a single type, OO-style.
EDIT
Russell Hart found that the documentation states table-valued methods are not supported, and fixed my syntax to produce the expected runtime error (see below).
In VS2010, after creating a new UDT, I added this at the end of the struct definition:
[SqlMethod(FillRowMethodName = "GetTable_FillRow", TableDefinition = "Id INT")]
public IEnumerable GetTable()
{
    ArrayList resultCollection = new ArrayList();
    resultCollection.Add(1);
    resultCollection.Add(2);
    resultCollection.Add(3);
    return resultCollection;
}

public static void GetTable_FillRow(object tableResultObj, out SqlInt32 Id)
{
    Id = (int)tableResultObj;
}
This builds and deploys successfully. But then in SSMS, we get a runtime error as expected (if not word-for-word):
-- needed to alias the column in the SELECT clause, rather than after the table alias.
declare @this dbo.tvm_example = ''
select t.[Id] as [ID] from @this.GetTable() as [t]
Msg 2715, Level 16, State 3, Line 2
Column, parameter, or variable #1: Cannot find data type dbo.tvm_example.
Parameter or variable '@this' has an invalid data type.
So, it seems it is not possible after all. And even if were possible, it probably wouldn't be wise, given the restrictions on altering CLR objects in SQL Server.
That said, if anyone knows a hack to get around this particular limitation, I'll raise a new bounty accordingly.
You have aliased the table but not the columns. Try,
declare @this dbo.tvm_example = ''
select t.[Id] as [ID] from @this.GetTable() as [t]
According to the documentation, http://msdn.microsoft.com/en-us/library/ms131069(v=SQL.100).aspx#Y4739, this should fail on another runtime error regarding incorrect type.
The SqlMethodAttribute class inherits from the SqlFunctionAttribute class, so SqlMethodAttribute inherits the FillRowMethodName and TableDefinition fields from SqlFunctionAttribute. This implies that it is possible to write a table-valued method, which is not the case. The method compiles and the assembly deploys, but an error about the IEnumerable return type is raised at runtime with the following message: "Method, property, or field '' in class '' in assembly '' has invalid return type."
They may be deliberately avoiding support for such methods: if you alter the assembly to update a method, that can cause problems for the data in existing UDT columns.
An appropriate solution is to have a minimal UDT, then a separate class of methods to accompany it. This will ensure flexibility and fully featured methods.
The xml nodes() method will not change, so it is not subject to the same implementation limitations.
Hope this helps, Peter, and good luck.
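For what it's worth, here is a hedged sketch of that "minimal UDT plus separate class of methods" suggestion, written as a table-valued UDF that takes the UDT as a parameter; FancyType and its RangeStart/RangeEnd members are hypothetical stand-ins, not part of the original question:
// Hedged sketch: keep the UDT minimal and expose the expansion as a separate
// table-valued UDF rather than a table-valued method on the type.
// requires: using System.Collections; using System.Data.SqlTypes;
//           using Microsoft.SqlServer.Server;
public static class FancyTypeFunctions
{
    [SqlFunction(FillRowMethodName = "FillRow", TableDefinition = "Id INT")]
    public static IEnumerable AsTable(FancyType value)
    {
        // Hypothetical expansion: yield one row per id held by the UDT value.
        for (int i = value.RangeStart; i <= value.RangeEnd; i++)
            yield return i;
    }

    public static void FillRow(object row, out SqlInt32 Id)
    {
        Id = (int)row;
    }
}
Once registered as a CLR table-valued function, it would be called like any other TVF, e.g. select * from dbo.AsTable(@udt), rather than as a method on the variable.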
