Limit size of each array value in PostgreSQL arrays

CREATE TABLE TEST(id int, description varchar(100));
INSERT INTO TEST VALUES (1, 'The quick brown fox'),
(1, 'This is a test to check for data'),
(1, 'This is just another test checking data'),
(2, 'Data set 2'),
(2, 'This is a test for data set 2'),
(2, 'Quickest fox catches the worms');
I have a query where I'm using the array_agg function to put all descriptions into one field. Due to size limits, I'm trying to bring back only the first 3 characters of each description.
select id, array_agg(id||', ') as ids,
array_agg(description||', ') as description
from test
group by id
I was trying to use the length function but I don't see how to limit each value in the array.
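For reference, a minimal sketch of one approach, using Postgres's built-in left() function to truncate each description before it is aggregated:
select id,
array_agg(left(description, 3)) as description
from test
group by id;
With the sample data above, id 1 would come back as {The,Thi,Thi}.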


Using loop to insert array type of data in PostgreSQL

I'd like to insert the array data equip into a table using a loop.
var svr = 1;
var equip = [3, 4, 5];
That means I need to insert the data three times, like this:
INSERT INTO khnp.link_server_equipment_map(svr, equip)
VALUES (1, 3);
INSERT INTO khnp.link_server_equipment_map(svr, equip)
VALUES (1, 4);
INSERT INTO khnp.link_server_equipment_map(svr, equip)
VALUES (1, 5);
Can someone please get me out of this rabbit hole? Thanks
You can try unnest. Note that set-returning functions such as unnest are not allowed in a VALUES list, so use SELECT instead:
INSERT INTO khnp.link_server_equipment_map(svr, equip)
SELECT 1, UNNEST(ARRAY[3, 4, 5]);
You can use the INSERT statement to insert several rows.
INSERT INTO table_name (column_list)
VALUES
(value_list_1),
(value_list_2),
...
(value_list_n);
Given your sample data, the rows would be inserted this way:
INSERT INTO khnp.link_server_equipment_map(svr, equip) VALUES
(1, 3),
(1, 4),
(1, 5);
Also, to avoid adding the array contents one by one, you can use the UNNEST array function.
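For example, if svr and equip live in client-side variables as in the question, here is a sketch of a single parameterized statement (assuming a driver such as node-postgres that maps a JavaScript array onto a Postgres array):
-- $1 = svr (int), $2 = equip (int[]); unnest expands the array into one row per element
INSERT INTO khnp.link_server_equipment_map(svr, equip)
SELECT $1, UNNEST($2::int[]);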

Max value in one table does not match value in another

I reviewed many answers about pulling a max value with a corresponding value from another column in the same table, but not when the corresponding column lives in another table.
I'd like to pull one row for the max bucket_id (62715659), with its corresponding payor_name value (HSN). The payor_name, however, lives in another table.
However, when I run this query:
select
hsp_account_id
,bucket_id
,epm.payor_name
from hsp_bucket bkt
left join clarity_epm epm on bkt.payor_id = epm.payor_id
where bucket_id in (select max(bucket_id) from hsp_bucket)
it returns 0 rows.
Here is some sample data from both tables:
CREATE TABLE hsp_bucket
(
hsp_account_id VarChar(50),
bucket_id NUMERIC(18,0),
payor_id NUMERIC(18,0)
);
INSERT INTO hsp_bucket
VALUES
('A', 10706486, NULL),
('A', 10706487, NULL),
('A', 10706488, NULL),
('A', 10706491, 1118),
('A', 10706489, 3004),
('A', 10706490, 4001),
('A', 62715659, 4001);
CREATE TABLE clarity_epm
(payor_id NUMERIC(18,0),
payor_name VarChar(50)
);
INSERT INTO clarity_epm
VALUES
(1118, 'BMCHP ALLI ACO'),
(3004, 'MEDICAID LIMITED'),
(4001, 'HSN');
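For what it's worth, with the sample data exactly as above the IN subquery does find the max row (62715659 joins to HSN), so the empty result suggests the real data differs somehow. As an alternative, here is a minimal sketch that sidesteps the subquery by ordering and keeping the top row (LIMIT is Postgres syntax; use TOP 1 or FETCH FIRST 1 ROW ONLY in other databases):
select bkt.hsp_account_id, bkt.bucket_id, epm.payor_name
from hsp_bucket bkt
left join clarity_epm epm on bkt.payor_id = epm.payor_id
order by bkt.bucket_id desc
limit 1;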

Snowflake Javascript SP output as Table?

I'm writing a stored procedure whose output is expected as a table, but I'm not able to get the output in table format; I receive it as an object, as a single value, or as all rows in one column when using an array as the return type.
create or replace table monthly_sales(empid int, amount int, month text)
as select * from values
(1, 10000, 'JAN'),
(1, 400, 'JAN'),
(2, 4500, 'JAN'),
(2, 35000, 'JAN'),
(1, 5000, 'FEB'),
(1, 3000, 'FEB'),
(2, 200, 'FEB'),
(2, 90500, 'FEB'),
(1, 6000, 'MAR'),
(1, 5000, 'MAR'),
(2, 2500, 'MAR'),
(2, 9500, 'MAR'),
(1, 8000, 'APR'),
(1, 10000, 'APR'),
(2, 800, 'APR'),
(2, 4500, 'APR'),
(2, 10000, 'MAY'),
(1, 800, 'MAY');
----------------------------------------------------------
select * from MONTHLY_SALES;
------------------------------------------------------------
create or replace procedure getRowCount(TABLENAME VARCHAR(1000))
returns variant
not null
language javascript
as
$$
// Dynamically compose the SQL statement to execute.
var sql_command = " SELECT * FROM "+TABLENAME+";"
// Prepare statement.
var stmt = snowflake.createStatement({sqlText: sql_command});
// Execute Statement
try
{
var rs = stmt.execute();
return rs;
}catch(err){return "error "+err;}
$$;
Call getRowCount('MONTHLY_SALES');
Expected output: the rows of MONTHLY_SALES returned as a tabular result set.
Snowflake stored procedures cannot have an output type of table. You have a few options. One option is writing a stored procedure that returns an array or JSON that you can flatten into a table. Note, though, that you cannot use the return value of a stored procedure directly; you have to run the stored procedure first and then, as the very next statement executed in the session, collect the output like this:
select * from table(result_scan(last_query_id()));
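As a rough sketch of that first option (procedure and column names here are illustrative, not from the question):
-- Hypothetical SP that collects one column into a JavaScript array
create or replace procedure getMonths()
returns variant
language javascript
as
$$
var out = [];
var rs = snowflake.createStatement({sqlText: "SELECT DISTINCT month FROM monthly_sales"}).execute();
while (rs.next()) {
out.push(rs.getColumnValue(1));
}
return out;
$$;
call getMonths();
-- Immediately afterwards, flatten the returned array back into rows
select f.value::text as month
from table(result_scan(last_query_id())) t,
lateral flatten(input => t.$1) f;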
Another option is writing a user defined table function (UDTF), which is the only function type that returns a table in Snowflake. Here's an example of a simple UDTF:
create or replace function COUNT_LOW_HIGH(LowerBound double, UpperBound double)
returns table (MY_NUMBER double)
LANGUAGE JAVASCRIPT
AS
$$
{
processRow: function get_params(row, rowWriter, context){
for (var i = row.LOWERBOUND; i <= row.UPPERBOUND; i++) {
rowWriter.writeRow({MY_NUMBER: i});
}
}
}
$$;
You can then call the UDTF using the TABLE function like this:
SELECT * FROM TABLE(COUNT_LOW_HIGH(1::double, 1000::double));

How to insert multiple rows in a pgsql table, if the table is empty?

I'm developing a Building Block for Blackboard, and have run into a database related issue.
I'm trying to insert four rows into a pgsql table, but only if the table is empty. The query runs as a post-schema update, and is therefore run whenever I re-install the building block. It is vital that I do not simply drop existing values and/or replace them (which would otherwise be a simple and effective solution).
Below is my existing query, that does the job, but only for one row. As I mentioned, I'm trying to insert four rows. I can't simply run the insert multiple times, as after the first run, the table would no longer be empty.
Any help will be appreciated.
BEGIN;
INSERT INTO my_table_name
SELECT
nextval('my_table_name_SEQ'),
'Some website URL',
'Some image URL',
'Some website name',
'Y',
'Y'
WHERE
NOT EXISTS (
SELECT * FROM my_table_name
);
COMMIT;
I managed to fix the issue.
In this post, @a_horse_with_no_name suggests using UNION ALL to solve a similar issue.
Also thanks to @Dan for suggesting COUNT rather than EXISTS.
My final query:
BEGIN;
INSERT INTO my_table (pk1, coll1, coll2, coll3, coll4, coll5)
SELECT x.pk1, x.coll1, x.coll2, x.coll3, x.coll4, x.coll5
FROM (
SELECT
nextval('my_table_SEQ') as pk1,
'Some website URL' as coll1,
'Some image URL' as coll2,
'Some website name' as coll3,
'Y' as coll4,
'Y' as coll5
UNION
SELECT
nextval('my_table_SEQ'),
'Some other website URL',
'Some other image URL',
'Some other website name',
'Y',
'N'
UNION
SELECT
nextval('my_table_SEQ'),
'Some other other website URL',
'Some other other image URL',
'Some other other website name',
'Y',
'N'
UNION
SELECT
nextval('my_table_SEQ'),
'Some other other other website URL',
'Some other other other image URL',
'Some other other other website name',
'Y',
'Y'
) as x
WHERE
(SELECT COUNT(*) FROM my_table) <= 0;
COMMIT;
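For comparison, a sketch of the same guard written with a multi-row VALUES list instead of chained UNIONs (same table, sequence, and column names as above; two of the four rows shown):
INSERT INTO my_table (pk1, coll1, coll2, coll3, coll4, coll5)
SELECT nextval('my_table_SEQ'), x.coll1, x.coll2, x.coll3, x.coll4, x.coll5
FROM (VALUES
('Some website URL', 'Some image URL', 'Some website name', 'Y', 'Y'),
('Some other website URL', 'Some other image URL', 'Some other website name', 'Y', 'N')
) AS x(coll1, coll2, coll3, coll4, coll5)
WHERE NOT EXISTS (SELECT 1 FROM my_table);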
It is better to count the rows, because COUNT gives you the actual number of rows in the table.
This should work:
BEGIN;
INSERT INTO my_table_name
SELECT
nextval('my_table_name_SEQ'),
'Some website URL',
'Some image URL',
'Some website name',
'Y',
'Y'
WHERE
(SELECT COUNT(*) FROM my_table_name) = 0;
COMMIT;
Inserts won't overwrite, so I'm not understanding that part of your question.
Below are two ways to insert multiple rows; the second example is a single SQL statement:
create table test (col1 int,
col2 varchar(10)
) ;
insert into test select 1, 'A' ;
insert into test select 2, 'B' ;
insert into test (col1, col2)
values (3, 'C'),
(4, 'D'),
(5, 'E') ;
select * from test ;
1 "A"
2 "B"
3 "C"
4 "D"
5 "E"

SQL Server DB Project Publish Empty string inserting a Zero

We are seeing a very strange problem when populating a field in a DB via a SQL Server DB project publish action.
This is the table definition
CREATE TABLE [dbo].[CatalogueItemExtensionFields]
(
[RowID] tinyint identity not null,
[FieldType] tinyint not null,
[Description] varchar(120) not null,
[Nullable] bit not null,
[DefaultValue] varchar(100) null,
[Active_Flag] bit null,
[OrderPriority] tinyint not null,
[ContextGuid] uniqueidentifier not null
);
This is the population script
set identity_insert CatalogueItemExtensionFields on
INSERT INTO CatalogueItemExtensionFields (rowid, fieldtype, description, nullable, defaultvalue, active_flag, orderpriority)
VALUES (dbo.ConstantProductGroupRowId(), 3, 'Product Group', 0, '', 1, dbo.ConstantProductGroupRowId());
set identity_insert CatalogueItemExtensionFields off
If I run the INSERT script manually all works fine. When I run it as part of the DB project publish, it inserts "0".
I have looked at the publish.sql script that is generated, and all looks fine.
BTW, the only similar post I have found is this, but it does not apply to our case because the field we are inserting into is defined as varchar.
This is driving us mad. Any ideas?
We at our company finally found out that if you use SQLCMD / DBProj, it is super important to ENABLE quoted identifiers. Otherwise the installer changes inputs in exactly the same way as @maurocam explained. If you enable this, it works the same as in SQL Server Management Studio, for example.
To enable it:
SQLCMD
Just use the -I parameter (the capital letter is important here; lowercase -i specifies the input file).
Example sqlcmd -S localhost -d DBNAME -U User -P Password -i path/to/sql/file.sql -I
SQL Itself
SET QUOTED_IDENTIFIER { ON | OFF }
DBProj
It can be set at the project or object (proc, func, ...) level. Just click on a project/file -> Properties and check whether QUOTED_IDENTIFIER is enabled there.
For schema compare it can be set via "Ignore quoted identifiers".
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-quoted-identifier-transact-sql?view=sql-server-ver16
APOLOGIES - MY MISTAKE!!! (But a very useful one to document.)
I have summarised it again below (also to make it clearer with respect to my initial post).
TABLE DEFINITION
CREATE TABLE [dbo].[CatalogueItemExtensionFields]
(
[RowID] tinyint identity not null,
[FieldType] tinyint not null,
[Description] varchar(120) not null,
[Nullable] bit not null,
[DefaultValue] varchar(100) null,
[Active_Flag] bit null,
[OrderPriority] tinyint not null
);
INSERT STATEMENT
set identity_insert CatalogueItemExtensionFields on
INSERT INTO CatalogueItemExtensionFields (rowid, fieldtype, description, nullable, defaultvalue, active_flag, orderpriority) VALUES
(6, 3, N'Product Group', 0, N'', 1, 6),
(7, 2, N'Minimum Order Quantity', 1, NULL, 1, 7),
(8, 3, N'Additional HIBCs', 0, 1, 1, 8),
(9, 3, N'Additional GTINs', 0, N'', 1, 9)
set identity_insert CatalogueItemExtensionFields off
Because I am inserting multiple rows, when SQL Server parses the statement it sees I am trying to insert a numeric defaultvalue = 1 for RowID = 8. As a result, even though the column is defined as a varchar, SQL Server decides that the INSERT statement is inserting INTs. So the empty-string values (for RowIDs 6 and 9) are converted to zero. The post I referred to relates to an actual INT column, which results in the same behaviour.
If I instead run the following statement, with a default value of '1' for RowID = 8, it all works fine.
INSERT INTO CatalogueItemExtensionFields (rowid, fieldtype, description, nullable, defaultvalue, active_flag, orderpriority) VALUES
(6, 3, N'Product Group', 0, N'', 1, 6),
(7, 2, N'Minimum Order Quantity', 1, NULL, 1, 7),
(8, 3, N'Additional HIBCs', 0, '1', 1, 8),
(9, 3, N'Additional GTINs', 0, N'', 1, 9)
So, now the question is: why does SQL Server ignore the column type definition and instead decide the type from the values in my INSERT statement?
Answer from SqlServerCentral:
"An empty string converts to an int without an error, and its value is zero. It has to do with the precedence of implicit conversions."
