Dynamic variables within SQL - sql-server

Good morning guys,
I'm having a slight issue with a SQL stored procedure. I am posting these variables to the procedure via an HTML form:
Post Params:
key: 'DocID_1' value: '18743921-3810-4516-AA6B-566DF1045BDE'
key: 'status_1' value: 'Ignore'
key: 'DocID_2' value: '228C53F6-FE71-4816-8865-B52C02114D54'
key: 'status_2' value: 'Ignore'
key: 'DocID_3' value: 'ABCFCD74-56A3-4B11-B2C4-8D5B827E5648'
key: 'status_3' value: 'Ignore'
key: 'DocID_4' value: 'CD3F7440-BC71-48D0-B358-9EA5366ACCDE'
key: 'status_4' value: 'Ignore'
key: 'rowCount' value: '4'
These are basically a list of documents which need to be approved, ignored, etc. In my stored procedure I've got a function which iterates up to the rowCount variable's value. What I am trying to do is use a select statement to select each DocID_ passed to the procedure.
My select statement so far:
DECLARE @rowCount int = 4
DECLARE @DocID_1 nvarchar(80) = '18743921-3810-4516-AA6B-566DF1045BDE'

SELECT
    num,
    @DocID_ + num
FROM
    dbo.ufn_getcount(1, @rowCount)
As you can see I am trying to concatenate @DocID_ and num, which is the number returned by my function. This returns an error which, to be honest, I expected:
Must declare the scalar variable "@DocID_"
After reviewing the above, is there any way to dynamically create variable names on the fly?

I'm not quite sure what the advantage of selecting @DocID_ + num (were that a valid operation) would be - from the code sample you've posted you'd already know which variable was being selected anyway.
This could be done with dynamic SQL, but it wouldn't be simple, and the risk of SQL injection if it isn't constructed properly is significant. Instead I'd look at the underlying requirement and work out a better way to implement it. Possible options (a sketch of the first follows the list):
Table valued parameters
Table variables / temporary tables
Amending your getcount function to return the value you're specifically looking for - we'd need to see its code to be certain, but I suspect it'd be possible.
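For the first option, here is a minimal sketch of what a table-valued parameter could look like; the type and procedure names (dbo.DocStatusList, dbo.ProcessDocuments) are hypothetical, and your application layer would bind all the posted rows to the single parameter instead of sending numbered DocID_n/status_n pairs:

CREATE TYPE dbo.DocStatusList AS TABLE
(
    DocID  uniqueidentifier NOT NULL,
    Status nvarchar(20)     NOT NULL
);
GO

CREATE PROCEDURE dbo.ProcessDocuments
    @docs dbo.DocStatusList READONLY
AS
BEGIN
    -- All rows arrive at once, so they can be selected or joined directly;
    -- no numbered variables and no dynamic SQL needed
    SELECT DocID, Status
    FROM @docs;
END;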

Related

How to add fields dynamically to snowflake's object_construct function

I have a large table of data in Snowflake that contains many fields with the name prefix 'RAW_'. In order to make my table more manageable, I wish to condense all of these 'RAW_' fields into a single field called 'RAW_KEY_VALUE', holding them as a key-value object.
It initially appeared that Snowflake's 'OBJECT_CONSTRUCT' function was going to be my perfect solution here. However, the issue with this function is that it requires manual input/hard-coding of the fields you wish to convert to a key-value object. This is problematic for me as I have anywhere from 90-120 fields I would need to manually place in this function. Additionally, these fields with the 'RAW_' prefix change all the time. It is therefore critical that I have a solution that allows me to dynamically pick up these fields and convert them to a key-value store. (I haven't tried creating a stored procedure for this yet but will if all else fails.)
Here is a snippet of the data in question
create or replace table reviews(name varchar(50), acting_rating int, raw_comments varchar(50), raw_rating int, raw_co varchar(50));
insert into reviews values
('abc', 4, NULL, 1, 'NO'),
('xyz', 3, 'some', 1, 'haha'),
('lmn', 1, 'what', 4, NULL);
Below is the output I'm trying to achieve (using the manual input/hard coding approach with object_construct)
select
    name,
    acting_rating,
    object_construct_keep_null('raw_comments', raw_comments, 'raw_rating', raw_rating, 'raw_co', raw_co) as RAW_KEY_VALUE
from reviews;
The above produces the desired output.
Please let me know if there are any other ways to approach this. I think if I was able to work out a way to add the relevant fields to the object_construct function dynamically, that would solve my problem.
You can do this with a JS UDF and object_construct(*):
create or replace function obj_with_prefix(PREFIX string, A variant)
returns variant
language javascript
as $$
    let result = {};
    for (let key in A) {
        if (key.startsWith(PREFIX))
            result[key] = A[key];
    }
    return result;
$$
;
Test:
with data(aa_1, aa_2, bb_1, aa_3) as (
    select 1, 2, 3, 4
)
select obj_with_prefix('AA', object_construct(*))
from data
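Applied to the reviews table from the question, usage could look like the sketch below. Note that unquoted Snowflake identifiers are stored upper-case, so object_construct(*) produces keys like RAW_COMMENTS and the prefix to match is 'RAW_'. Also, object_construct(*) omits NULL values; if you need them preserved (as with object_construct_keep_null in the question), swapping in object_construct_keep_null(*) should behave the same way, assuming it accepts the star form:

select
    name,
    acting_rating,
    obj_with_prefix('RAW_', object_construct(*)) as RAW_KEY_VALUE
from reviews;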

Creating index on specific JSON value inside an object array

So let's say I have a varchar column in a table with some structure like:
{
    "Response":{
        "DataArray":[
            {
                "Type":"Address",
                "Value":"123 Fake St"
            },
            {
                "Type":"Name",
                "Value":"John Doe"
            }
        ]
    }
}
And I want to create a persisted computed column on the "Value" field of the "DataArray" element whose "Type" field equals "Name". (I hope I explained that properly. Basically I want to index the people's names in that structure.)
The problem is that, unlike with other json objects, I can't use the JSON_VALUE function in a straightforward way to extract said value. I've no idea if this can be done, I've been dabbling with JSON_QUERY but so far I've no idea what to do.
Any ideas and help appreciated. Thanks!
You could achieve it using a function:
CREATE FUNCTION dbo.my_func(@s NVARCHAR(MAX))
RETURNS NVARCHAR(100)
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @r NVARCHAR(100);

    SELECT @r = [Value]
    FROM OPENJSON(@s, '$.Response.DataArray')
         WITH ([Type] NVARCHAR(100) '$.Type', [Value] NVARCHAR(100) '$.Value')
    WHERE [Type] = 'Name';

    RETURN @r;
END;
Defining table:
CREATE TABLE tab(
    val NVARCHAR(MAX) CHECK (ISJSON(val) = 1),
    col1 AS dbo.my_func(val) PERSISTED -- calculated column
);
Sample data:
INSERT INTO tab(val) VALUES (N'{
    "Response":{
        "DataArray":[
            {
                "Type":"Address",
                "Value":"123 Fake St"
            },
            {
                "Type":"Name",
                "Value":"John Doe"
            }
        ]
    }
}');
CREATE INDEX idx ON tab(col1); -- creating index on calculated column
SELECT * FROM tab;
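With the index in place, an equality search on the extracted name can use it rather than parsing the JSON per row, for example:

SELECT * FROM tab WHERE col1 = N'John Doe';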
You could use a computed column with PATINDEX and index that:
CREATE TABLE foo (a varchar(4000), a_ax AS (IIF(PATINDEX('%bar%', a) > 0, SUBSTRING(a, PATINDEX('%bar%', a), 42), '')))
CREATE INDEX foo_x ON foo(a_ax)
You could use a scalar function as @Lukasz Szozda posted - it's a good solution for this.
The problem, however, with T-SQL scalar UDFs in computed columns is that they destroy the performance of any query that table is involved in. Not only does data modification (inserts, updates, deletes) slow down, queries that involve that table cannot leverage a parallel execution plan. This is the case even when the computed column is not referenced in the query. Even index builds lose the ability to run in parallel. Note this article: Another reason why scalar functions in computed columns is a bad idea by Erik Darling.
This is not as pretty but, if performance is important, then this will get you the results you need without the drawbacks of a scalar UDF.
CREATE TABLE dbo.jsonStrings
(
    jsonString VARCHAR(8000) NOT NULL,
    nameTxt AS (
        SUBSTRING(
            SUBSTRING(jsonString,
                CHARINDEX('"Value":"', jsonString,
                    CHARINDEX('"Type":"Name",', jsonString,
                        CHARINDEX('"DataArray":[', jsonString) + 12)) + 9, 8000), 1,
            CHARINDEX('"',
                SUBSTRING(jsonString,
                    CHARINDEX('"Value":"', jsonString,
                        CHARINDEX('"Type":"Name",', jsonString,
                            CHARINDEX('"DataArray":[', jsonString) + 12)) + 9, 8000)) - 1)) PERSISTED
);
INSERT dbo.jsonStrings(jsonString)
VALUES
('{
    "Response":{
        "DataArray":[
            {
                "Type":"Address",
                "Value":"123 Fake St"
            },
            {
                "Type":"Name",
                "Value":"John Doe"
            }
        ]
    }
}');
Note that this works well for the structure you posted; it may need to be tweaked depending on how the JSON can vary.
A second (and better but more complex) solution would be to take the JSON path logic from Lukasz Szozda's scalar UDF and move it into a SQLCLR function. CLR scalar UDFs, when written correctly, do not have the aforementioned problems that T-SQL scalar UDFs do.

Postgres function with jsonb parameters

I have seen a similar post here but my situation is slightly different from anything I've found so far. I am trying to call a postgres function with parameters that I can leverage in the function logic as they pertain to the jsonb query. Here is an example of the query I'm trying to recreate with parameters.
SELECT *
FROM edit_data
WHERE ("json_field"#>'{Attributes}')::jsonb @>
      '{"issue_description":"my description",
        "reporter_email":"user@generic.com"}'::jsonb
I can run this query just fine in pgAdmin, but all my attempts thus far to run this inside a function with parameters for the "my description" and "user@generic.com" values have failed. Here is a simple example of the function I'm trying to create:
CREATE OR REPLACE FUNCTION get_Features(
    p1 character varying,
    p2 character varying)
RETURNS SETOF edit_metadata AS
$BODY$
    SELECT * FROM edit_metadata
    WHERE ("geo_json"#>'{Attributes}')::jsonb @> '{"issue_description":$p1, "reporter_email":$p2}'::jsonb;
$BODY$
LANGUAGE sql VOLATILE
COST 100
ROWS 1000;
I know that the syntax is incorrect and I've been struggling with this for a day or two. Can anyone help me understand how to best deal with these double quotes around the value and leverage a parameter here?
TIA
You could use the function json_build_object:
select json_build_object(
    'issue_description', 'my description',
    'reporter_email', 'user@generic.com');
And you get:
                               json_build_object
--------------------------------------------------------------------------------
 {"issue_description" : "my description", "reporter_email" : "user@generic.com"}
(1 row)
That way there's no risk of producing invalid syntax (no hassle with quoting strings) and you can swap in the values as parameters.
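Putting that together, a minimal sketch of the function from the question might look like this, using jsonb_build_object (the jsonb variant) and the containment operator @>, with the table and column names taken from the question:

CREATE OR REPLACE FUNCTION get_Features(
    p1 character varying,
    p2 character varying)
RETURNS SETOF edit_metadata AS
$BODY$
    -- jsonb_build_object handles all quoting, so the parameters
    -- can be passed straight through without string concatenation
    SELECT *
    FROM edit_metadata
    WHERE ("geo_json"#>'{Attributes}')::jsonb @>
          jsonb_build_object('issue_description', p1, 'reporter_email', p2);
$BODY$
LANGUAGE sql STABLE;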

SQL Server 2014 - XQuery - get comma-separated List

I have a database table in SQL Server 2014 with only an ID column (int) and a column xmldata of type XML.
This xmldata column contains for example:
<book>
    <title>a nice Novel</title>
    <author>Maria</author>
    <author>Peter</author>
</book>
As expected, I have multiple books, therefore multiple rows with xmldata.
I now want to execute a query for all books where Peter is an author. I tried this in some XPath 2.0 testers and came to the conclusion that:
/book/author/concat(text(), if(position() != last())then ',' else '')
works.
Porting this into SQL Server 2014 Express looks like this (with correctly escaped syntax etc.):
SELECT id
FROM books
WHERE 'Peter' IN (xmldata.query('/book/author/concat(text(), if(position() != last())then '','' else '''')'))
SQL Server however does not seem to support a construction like /concat(...) because of:
The XQuery syntax '/function()' is not supported.
I am at a loss, however, as to why /text() does work in:
SELECT id, xmldata.query('/book/author/text()')
FROM books
which it does.
My constraints:
I am bound to use SQL Server
I am bound to XPath or something else that can be "injected" like the statement above (if the structure of the XML or the database changes, the XPath above can be changed in isolation and the application logic that constructs the WHERE clause is not touched). SEE EDIT
Is there a way to make this work?
regards,
BillDoor
EDIT:
My second constraint boils down to this:
An Application constructs the Where clause by
expression <operator> value(s)
expression is stored in a database and is mapped by the XML tag, e.g.:
| tokenname | querystring
| "author"  | "xmldata.query(/book/author/text())"
The values are provided by the requesting user. So if the user asks for the author "Peter" with operator "EQUALS", the application constructs:
xmldata.query(/book/author/text()) = "Peter"
as where clause.
If the customer now decides that author needs to be nested in an <authors> element, I can simply change the expression in the construction database and the whole machine keeps running without any changes to the code - simply manageable.
So I need a way to achieve that
<xPath> <operator> "Peter"
or any other combination of these three isolated components (see above: "Peter" IN <xPath>...) gets me all of Peter's books, even if there are multiple unsorted authors.
This would not suffice either (it's not SQL Server syntax, but you get the idea):
WHERE xmldata.exist('/dossier/client[text() = "$1"]', "Peter") = 1;
Because the operator is still nested in the expression, I could not request <> "Peter".
I know this is strange, please don't question the concept as a whole - it has a history :/
EDIT: further clarification:
The filter-rules come into the app in an XML structure basically:
Operator: "EQ"
field: "name"
value "Peter"
evaluates to:
expression = lookupExpressionForField("name") --> "table2.xmldata.value('book/author/name[1]', 'varchar')"
operator = lookUpOperatorMapping("EQ") --> "="
value = FormatValues("Peter") --> "Peter" (if multiple values are passed, FormatValues constructs a comma-separated list)
The application then builds:
- constructClause(String expression, String operator, String value)
which yields:
"table2.xmldata.value('book/author/name[1]', 'varchar')" + "=" + "Peter"
and then constructs a SELECT statement with the result as the WHERE clause.
It does not build it exactly like this - unescaped and unfiltered for injection - but this is the basic idea.
I can influence how the input is translated, meaning I can implement the methods:
lookupExpressionForField(String field)
lookUpOperatorMapping(String operator)
FormatValues(List<String> values) | FormatValues(String value)
constructClause(String expression,String operator,String value)
however I choose; I can change the parameter types and implement them freely. The less, the better, of course. So simply constructing a comma-separated list with XPath would be optimal (as if I could just tick "enable /function() syntax in XPath" somewhere in SQL Server and the /concat(if...) would work).
How about something like this:
SET NOCOUNT ON;
DECLARE @Books TABLE (ID INT NOT NULL IDENTITY(1, 1) PRIMARY KEY, BookInfo XML);

INSERT INTO @Books (BookInfo)
VALUES (N'<book>
    <title>a nice Novel</title>
    <author>Maria</author>
    <author>Peter</author>
</book>');

INSERT INTO @Books (BookInfo)
VALUES (N'<book>
    <title>another one</title>
    <author>Bob</author>
</book>');

SELECT *
FROM @Books bk
WHERE bk.BookInfo.exist('/book/author[text() = "Peter"]') = 1;
This returns only the first "book" entry. From there you can extract any portion of the XML field using the "value" function.
The "exist" function returns a boolean / BIT. This will scan through all "author" nodes within "book", so there is no need to concat into a comma-separated list only for use in an IN list, which wouldn't work anyway ;-).
For more info on the "value" and "exist" functions, as well as the other functions for use with XML data, please see:
xml Data Type Methods

How to retrieve multiple rows from stored procedure with Scala?

Say you have a stored procedure or function returning multiple rows, as discussed in How to return multiple rows from the stored procedure? (Oracle PL/SQL)
What would be a good way, using Scala, to "select * from table (all_emps);" (taken from URL above) and read the multiple rows of data that would be the result?
As far as I can see it is not possible to do this using Squeryl. Is there a scalaified tool like Squeryl that I can use, or do I have to drop to JDBC?
Functions that return tables are an Oracle-specific feature; I doubt an ORM (be it Scala or even Java) would have support for such a proprietary extension.
So I think you're more or less on your own :).
Probably the easiest way is to use a plain JDBC java.sql.Statement and execute "select * from table (all_emps)" with the executeQuery method.
To address the second part of your question about a way to select from table in a more scala-esque way, I am using Slick. Quoting from their example documentation:
case class Coffee(name: String, supID: Int, price: Double)
implicit val getCoffeeResult = GetResult(r => Coffee(r.<<, r.<<, r.<<))

Database.forURL("...") withSession {
  Seq(
    Coffee("Colombian", 101, 7.99),
    Coffee("Colombian_Decaf", 101, 8.99),
    Coffee("French_Roast_Decaf", 49, 9.99)
  ).foreach(c => sqlu"""
    insert into coffees values (${c.name}, ${c.supID}, ${c.price})
  """.execute)

  val sup = 101
  val q = sql"select * from coffees where sup_id = $sup".as[Coffee]
  // A bind variable to prevent SQL injection ^
  q.foreach(println)
}
Though I am not sure how it's dealing (if at all) with stored procs/functions.
