I'm writing a stored procedure whose output is expected as a table. But I'm not able to get the output in table format: I receive it as a single object value, or as all rows crammed into one column when I use an array as the return type.
create or replace table monthly_sales(empid int, amount int, month text)
as select * from values
(1, 10000, 'JAN'),
(1, 400, 'JAN'),
(2, 4500, 'JAN'),
(2, 35000, 'JAN'),
(1, 5000, 'FEB'),
(1, 3000, 'FEB'),
(2, 200, 'FEB'),
(2, 90500, 'FEB'),
(1, 6000, 'MAR'),
(1, 5000, 'MAR'),
(2, 2500, 'MAR'),
(2, 9500, 'MAR'),
(1, 8000, 'APR'),
(1, 10000, 'APR'),
(2, 800, 'APR'),
(2, 4500, 'APR'),
(2, 10000, 'MAY'),
(1, 800, 'MAY');
----------------------------------------------------------
select * from MONTHLY_SALES;
------------------------------------------------------------
create or replace procedure getRowCount(TABLENAME VARCHAR(1000))
returns variant
not null
language javascript
as
$$
// Dynamically compose the SQL statement to execute.
var sql_command = "SELECT * FROM " + TABLENAME + ";";
// Prepare the statement.
var stmt = snowflake.createStatement({sqlText: sql_command});
// Execute the statement and return its result set object.
try {
    var rs = stmt.execute();
    return rs;
} catch (err) {
    return "error " + err;
}
$$;
Call getRowCount('MONTHLY_SALES');
Expected output: the result set of SELECT * FROM MONTHLY_SALES, returned as a table rather than as a single variant value.
Snowflake stored procedures cannot have an output type of table. You have a few options. One option is writing a stored procedure that returns an array or JSON that you can flatten into a table. Note, though, that you cannot use the return value of a stored procedure directly. You'd have to run the stored procedure first and then, as the very next statement executed in the session, collect the output like this:
select * from table(result_scan(last_query_id()));
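For example, here is a minimal sketch of the array option against the MONTHLY_SALES table above (the procedure name getRowsAsArray and the hard-coded column names are illustrative, not part of the original question):
create or replace procedure getRowsAsArray(TABLENAME VARCHAR)
returns array
language javascript
as
$$
// Collect every row as a JSON object and return the whole set as one array.
var rows = [];
var stmt = snowflake.createStatement({sqlText: "SELECT * FROM " + TABLENAME});
var rs = stmt.execute();
while (rs.next()) {
    rows.push({
        EMPID:  rs.getColumnValue("EMPID"),
        AMOUNT: rs.getColumnValue("AMOUNT"),
        MONTH:  rs.getColumnValue("MONTH")
    });
}
return rows;
$$;

call getRowsAsArray('MONTHLY_SALES');
-- Must be the very next statement in the session:
select f.value:EMPID::int    as empid,
       f.value:AMOUNT::int   as amount,
       f.value:MONTH::string as month
from table(result_scan(last_query_id())) t,
     lateral flatten(input => t.$1) f;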
Another option is writing a user-defined table function (UDTF), which is the only function type that returns a table in Snowflake. Here's an example of a simple UDTF:
create or replace function COUNT_LOW_HIGH(LowerBound double, UpperBound double)
returns table (MY_NUMBER double)
LANGUAGE JAVASCRIPT
AS
$$
{
    processRow: function get_params(row, rowWriter, context) {
        // Emit one row per integer in [LOWERBOUND, UPPERBOUND].
        for (var i = row.LOWERBOUND; i <= row.UPPERBOUND; i++) {
            rowWriter.writeRow({MY_NUMBER: i});
        }
    }
}
$$;
You can then call the UDTF using the TABLE function like this:
SELECT * FROM TABLE(COUNT_LOW_HIGH(1::double, 1000::double));
I'd like to insert the array data in equip into a table using a loop.
var svr = 1;
var equip = [3, 4, 5];
For that, I need to insert the data three times. It looks like this:
INSERT INTO khnp.link_server_equipment_map(svr, equip)
VALUES (1, 3);
INSERT INTO khnp.link_server_equipment_map(svr, equip)
VALUES (1, 4);
INSERT INTO khnp.link_server_equipment_map(svr, equip)
VALUES (1, 5);
Can someone please get me out of this rabbit hole? Thanks
You can try UNNEST. Note this is PostgreSQL syntax, and a set-returning function such as UNNEST isn't allowed inside a VALUES list, so use SELECT instead:
INSERT INTO khnp.link_server_equipment_map(svr, equip)
SELECT 1, UNNEST(ARRAY[3, 4, 5]);
You can use a single INSERT statement to insert several rows.
INSERT INTO table_name (column_list)
VALUES
(value_list_1),
(value_list_2),
...
(value_list_n);
With the sample data you mentioned, the insertion would be done this way:
INSERT INTO khnp.link_server_equipment_map(svr, equip) VALUES
(1, 3),
(1, 4),
(1, 5);
Also, to avoid listing the array content one by one, you can use the UNNEST array function, as sketched below.
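For instance, a minimal sketch in PostgreSQL (assuming the table from the question; the alias e is illustrative):
-- One inserted row per array element; svr stays constant:
INSERT INTO khnp.link_server_equipment_map(svr, equip)
SELECT 1 AS svr, e AS equip
FROM UNNEST(ARRAY[3, 4, 5]) AS e;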
I reviewed many answers about pulling a max value with a corresponding value from another column in the same table, but not when the corresponding column lives in another table.
Consider: I'd like to pull one row for the max bucket_id (62715659), with its corresponding payor_name value (HSN). The payor_name, however, lives in another table.
Instead, when I run this query:
select
hsp_account_id
,bucket_id
,epm.payor_name
from hsp_bucket bkt
left join clarity_epm epm on bkt.payor_id = epm.payor_id
where bucket_id in (select max(bucket_id) from hsp_bucket)
I get back 0 rows.
Here is some sample data from both tables:
CREATE TABLE hsp_bucket
(
hsp_account_id VarChar(50),
bucket_id NUMERIC(18,0),
payor_id NUMERIC(18,0)
);
INSERT INTO hsp_bucket
VALUES
('A', 10706486, NULL),
('A', 10706487, NULL),
('A', 10706488, NULL),
('A', 10706491, 1118),
('A', 10706489, 3004),
('A', 10706490, 4001),
('A', 62715659, 4001);
CREATE TABLE clarity_epm
(payor_id NUMERIC(18,0),
payor_name VarChar(50)
);
INSERT INTO clarity_epm
VALUES
(1118, 'BMCHP ALLI ACO'),
(3004, 'MEDICAID LIMITED'),
(4001, 'HSN');
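For what it's worth, one way to get the desired row from this sample data is to sort by bucket_id and keep the top row. This is only a sketch, assuming SQL Server syntax (which the NUMERIC(18,0) columns suggest):
select top 1
hsp_account_id
,bucket_id
,epm.payor_name
from hsp_bucket bkt
left join clarity_epm epm on bkt.payor_id = epm.payor_id
order by bkt.bucket_id desc;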
CREATE TABLE TEST(id int, description varchar(100));
INSERT INTO TEST VALUES (1, 'The quick brown fox'),
(1, 'This is a test to check for data'),
(1, 'This is just another test checking data'),
(2, 'Data set 2'),
(2, 'This is a test for data set 2'),
(2, 'Quickest fox catches the worms');
I have a query where I'm using the array_agg function to put all descriptions in one field. Due to size limits, I'm trying to bring back only the first 3 characters of each description.
select id, array_agg(id||', ') as ids,
array_agg(description||', ') as description
from test
group by id
I was trying to use the length function, but I don't see how to limit each value in the array.
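One approach, sketched against the TEST table above (LEFT is assumed to be available; SUBSTR works as well), is to truncate each value before it is aggregated:
-- Keep only the first 3 characters of each description:
select id,
       array_agg(left(description, 3)) as short_descriptions
from test
group by id;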
Running on SQL Server 2016.
I have a routine that updates information across servers. I want to hold a list of any changes that I have been required to make. I am trying to output the changed columns as XML for basic storage, and would like to do this directly from the OUTPUT generated by the insert/update/delete if possible.
As an example:
DROP TABLE IF EXISTS Test
CREATE TABLE Test (myKey INT, myValue INT)
INSERT INTO dbo.Test (myKey, myValue)
VALUES (1, 1), (2, 2), (3, 3)
UPDATE dbo.Test
SET myValue = myValue + 10
OUTPUT Deleted.*
, Inserted.*
WHERE myKey < 3
SELECT *
FROM dbo.Test
FOR XML AUTO
DROP TABLE dbo.Test
I know I can set up a table variable to receive the output and then convert to XML from there, but it feels like I'm taking extra steps to do something that should be quite straightforward.
DROP TABLE IF EXISTS Test
CREATE TABLE Test (myKey INT, myValue INT)
INSERT INTO dbo.Test (myKey, myValue)
VALUES (1, 1), (2, 2), (3, 3)
DECLARE @OutputValues AS TABLE(dMyKey INT, dMyValue INT, iMyKey INT, iMyValue INT)
UPDATE dbo.Test
SET myValue = myValue + 10
OUTPUT Deleted.myKey
, Deleted.myValue
, Inserted.myKey
, Inserted.myValue
INTO @OutputValues
WHERE myKey < 3
SELECT *
FROM @OutputValues
FOR XML AUTO
DROP TABLE dbo.Test
While this second piece of code does achieve the sort of output I am looking for, going via a table variable seems a bit wasteful.
If I can format the output from the original code directly as XML I feel that would be a better solution. However, I can't see any way of doing so.
Many Thanks.
What are you trying to achieve? Such monitoring / auditing tasks are most likely better solved within a trigger...
The OUTPUT clause does not allow sub-selects... The only way I could think of - not generic and rather ugly - is this (the trick is the implicit cast from nvarchar to xml):
CREATE TABLE Test (myKey INT, myValue INT)
INSERT INTO dbo.Test (myKey, myValue)
VALUES (1, 1), (2, 2), (3, 3)
DECLARE @OutputValues AS TABLE(dMyKey INT, dMyValue INT, iMyKey INT, iMyValue INT, Changed XML)
UPDATE dbo.Test
SET myValue = myValue + 10
OUTPUT Deleted.myKey
, Deleted.myValue
, Inserted.myKey
, Inserted.myValue
, N'<root><deletedKey>' + CAST(deleted.myKey AS NVARCHAR(MAX)) + N'</deletedKey>' +
N'<deletedValue>' + CAST(deleted.myValue AS NVARCHAR(MAX)) + N'</deletedValue>' +
N'<insertedKey>' + CAST(inserted.myKey AS NVARCHAR(MAX)) + N'</insertedKey>' +
N'<insertedValue>' + CAST(inserted.myValue AS NVARCHAR(MAX)) + N'</insertedValue>' +
N'</root>'
INTO @OutputValues
WHERE myKey < 3
SELECT Changed
FROM @OutputValues
DROP TABLE dbo.Test
Attention: if your values include characters that are reserved in XML (such as <, > or &), this will fail!
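If that is a concern, a safer variant (a sketch reusing the @OutputValues table variable from above) is to capture the plain columns with OUTPUT ... INTO and build the XML afterwards with FOR XML, which escapes reserved characters for you:
SELECT dMyKey   AS deletedKey
     , dMyValue AS deletedValue
     , iMyKey   AS insertedKey
     , iMyValue AS insertedValue
FROM @OutputValues
FOR XML PATH('row'), ROOT('root');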
We are seeing a very strange problem when populating a field in a DB via a SQL Server DB project publish action.
This is the table definition
CREATE TABLE [dbo].[CatalogueItemExtensionFields]
(
[RowID] tinyint identity not null,
[FieldType] tinyint not null,
[Description] varchar(120) not null,
[Nullable] bit not null,
[DefaultValue] varchar(100) null,
[Active_Flag] bit null,
[OrderPriority] tinyint not null,
[ContextGuid] uniqueidentifier not null
);
This is the population script
set identity_insert CatalogueItemExtensionFields on
INSERT INTO CatalogueItemExtensionFields (rowid, fieldtype, description, nullable, defaultvalue, active_flag, orderpriority)
VALUES (dbo.ConstantProductGroupRowId(), 3, 'Product Group', 0, '', 1, dbo.ConstantProductGroupRowId());
set identity_insert CatalogueItemExtensionFields off
If I run the INSERT script manually all works fine. When I run it as part of the DB project publish, it inserts "0".
I have looked at the publish.sql script that is generated, and all looks fine.
BTW, the only similar post I have found is this, but it does not apply to our case because the field we are inserting into is defined as varchar.
This is driving us mad. Any ideas?
We at our company finally found out that if you use SQLCMD / DBProj, it is super important to ENABLE quoted identifiers. Otherwise the installer changes the inputs in exactly the same way as @maurocam explained. If you enable this, it works the same as in SQL Server Management Studio, for example.
To enable it:
SQLCMD
just use the -I parameter (the capital letter is important here; lowercase -i is for the input file). Example:
sqlcmd -S localhost -d DBNAME -U User -P Password -i path/to/sql/file.sql -I
SQL Itself
SET QUOTED_IDENTIFIER { ON | OFF }
DBProj
This can be set at the project or object (proc, func, ...) level. Just right-click a project/file -> Properties and check whether QUOTED_IDENTIFIER is enabled there.
For schema compare it can be set via "Ignore quoted identifiers"
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-quoted-identifier-transact-sql?view=sql-server-ver16
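As a quick illustration of what the setting changes (a sketch; the aliases are illustrative): with QUOTED_IDENTIFIER OFF, double-quoted text is a string literal, while with it ON, double-quoted text must name an identifier:
SET QUOTED_IDENTIFIER OFF;
SELECT "hello" AS greeting;   -- "hello" is treated as a string literal
SET QUOTED_IDENTIFIER ON;
SELECT 'hello' AS "greeting"; -- "greeting" is treated as an identifier (the column alias)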
APOLOGIES - MY MISTAKE!!! (But a very useful one to document)
I have summarised it again below (also to make it clearer with respect to my initial post).
TABLE DEFINITION
CREATE TABLE [dbo].[CatalogueItemExtensionFields]
(
[RowID] tinyint identity not null,
[FieldType] tinyint not null,
[Description] varchar(120) not null,
[Nullable] bit not null,
[DefaultValue] varchar(100) null,
[Active_Flag] bit null,
[OrderPriority] tinyint not null
);
INSERT STATEMENT
set identity_insert CatalogueItemExtensionFields on
INSERT INTO CatalogueItemExtensionFields (rowid, fieldtype, description, nullable, defaultvalue, active_flag, orderpriority) VALUES
(6, 3, N'Product Group', 0, N'', 1, 6),
(7, 2, N'Minimum Order Quantity', 1, NULL, 1, 7),
(8, 3, N'Additional HIBCs', 0, 1, 1, 8),
(9, 3, N'Additional GTINs', 0, N'', 1, 9)
set identity_insert CatalogueItemExtensionFields off
Because I am inserting multiple rows, when SQL Server parses the statement it sees that I am trying to insert a numeric defaultvalue = 1 for RowID = 8. As a result, even though the column is defined as a varchar, SQL Server decides that the INSERT statement is inserting INTs. So the empty-string values (for RowIDs 6 and 9) are converted to zero. The post I had referred to relates to an actual INT column, which exhibits the same behaviour.
If I instead run the following statement, with a default value of '1' for RowID = 8, it all works fine.
INSERT INTO CatalogueItemExtensionFields (rowid, fieldtype, description, nullable, defaultvalue, active_flag, orderpriority) VALUES
(6, 3, N'Product Group', 0, N'', 1, 6),
(7, 2, N'Minimum Order Quantity', 1, NULL, 1, 7),
(8, 3, N'Additional HIBCs', 0, '1', 1, 8),
(9, 3, N'Additional GTINs', 0, N'', 1, 9)
So, now the question is: why does SQL Server ignore the column type definition and instead decide the type from the values in my INSERT statement?
Answer from SqlServerCentral:
"An empty string converts to an int without an error, and its value is zero. It has to do with the precedence of implicit conversions."