I am trying to create a list of JSON objects by processing the result set from a different query execution. I want to persist the list of JSON objects to another table: copy the JSON to a stage so that later I can run the COPY command and the JSON data gets copied to the other table. How can I achieve this? Any thoughts? The code is shared as a screenshot in the attached image.
If you are only looking to persist/insert the result set generated within your procedure, you can incorporate something like the following in your code:
CREATE OR REPLACE procedure stproc1(anyvar varchar)
RETURNS varchar not null
LANGUAGE javascript strict
AS
$$
    var rowarr = [];
    var rowobj = {};

    // Build a dummy array of row objects
    rowobj['processid'] = 100;
    rowobj['loc'] = 200;
    rowobj['item'] = 300;
    rowarr.push(rowobj);

    // Ensure the object is reset before another push, otherwise
    // both array entries would reference the same object
    rowobj = {};
    rowobj['processid'] = 10;
    rowobj['loc'] = 20;
    rowobj['item'] = 30;
    rowarr.push(rowobj);

    var len = rowarr.length;
    var querystr = "";

    // Iterate through the array to INSERT each JSON string into the target table
    for (var x = 0; x < len; x++) {
        querystr = "insert into item_j values (?)";
        var statement = snowflake.createStatement({sqlText: querystr, binds: [JSON.stringify(rowarr[x])]});
        var rs = statement.execute();
    }
    return "BATCH";
$$
;
After executing the above, the results are stored in table ITEM_J, which has a single VARCHAR column.
select * from item_j;
+-------+
| T_VAL |
|-------|
+-------+
call stproc1('a');
+---------+
| STPROC1 |
|---------|
| BATCH |
+---------+
select * from item_j;
+----------------------------------------+
| T_VAL |
|----------------------------------------|
| {"processid":100,"loc":200,"item":300} |
| {"processid":10,"loc":20,"item":30} |
+----------------------------------------+
2 Row(s) produced.
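To route the JSON through a stage, as the question asks, one option is to unload the rows to an internal stage and COPY them into the other table later. A minimal sketch, assuming a hypothetical internal stage my_stage and a hypothetical target table ITEM_TARGET with a single VARIANT column:

-- Unload the JSON rows from ITEM_J into a file on the stage;
-- unloading as JSON requires a single VARIANT column, hence PARSE_JSON
COPY INTO @my_stage/item_json
FROM (SELECT PARSE_JSON(t_val) FROM item_j)
FILE_FORMAT = (TYPE = 'JSON');

-- Later, load the staged file into the target table
-- (ITEM_TARGET is assumed to have one VARIANT column)
COPY INTO item_target
FROM @my_stage/item_json
FILE_FORMAT = (TYPE = 'JSON');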
Suppose I have a table PRODUCTS with many columns, and that I want to insert/update a row using a MERGE statement. It is something along these lines:
MERGE INTO PRODUCTS AS Target
USING (VALUES(42, 'Foo', 'Bar', 0, 14, 200, NULL)) AS Source (ID, Name, Description, IsSpecialPrice, CategoryID, Price, SomeOtherField)
ON Target.ID = Source.ID
WHEN MATCHED THEN
-- update
WHEN NOT MATCHED BY TARGET THEN
-- insert
To write the UPDATE and INSERT "sub-statements" it seems I have to specify once again each and every column field. So -- update would be replaced by
UPDATE SET ID = Source.ID, Name = Source.Name, Description = Source.Description...
and -- insert by
INSERT (ID, Name, Description...) VALUES (Source.ID, Source.Name, Source.Description...)
This is very error-prone, hard to maintain, and apparently not really needed in the simple case where I just want to merge two "field sets" each representing a full table row. I appreciate that the update and insert statements could actually be anything (I've already used this in an unusual case in the past), but it would be great if there was a more concise way to represent the case where I just want "Target = Source" or "insert Source".
Does a better way to write the update and insert statements exist, or do I really need to specify the full column list every time?
You have to write the complete column lists.
You can check the SQL Server documentation for MERGE. Most SQL Server statement documentation starts with a syntax definition that shows you exactly what is allowed. For instance, the section for UPDATE is defined as:
<merge_matched>::=
    { UPDATE SET <set_clause> | DELETE }

<set_clause>::=
SET
  { column_name = { expression | DEFAULT | NULL }
  | { udt_column_name.{ { property_name = expression
                        | field_name = expression }
                        | method_name ( argument [ ,...n ] ) }
    }
  | column_name { .WRITE ( expression , @Offset , @Length ) }
  | @variable = expression
  | @variable = column = expression
  | column_name { += | -= | *= | /= | %= | &= | ^= | |= } expression
  | @variable { += | -= | *= | /= | %= | &= | ^= | |= } expression
  | @variable = column { += | -= | *= | /= | %= | &= | ^= | |= } expression
  } [ ,...n ]
As you can see, the only options in <set_clause> are individual column assignments. There is no "bulk" assignment option. Lower down in the documentation you'll find that the options for INSERT also require individual expressions (at least in the VALUES clause; you can omit the column names after the INSERT, but that's generally frowned upon).
SQL tends to favour verbose, explicit syntax.
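For illustration, here is the question's example fully written out using the column list from the USING clause (the join key ID is omitted from the UPDATE, since the ON clause already guarantees it matches):

MERGE INTO PRODUCTS AS Target
USING (VALUES(42, 'Foo', 'Bar', 0, 14, 200, NULL)) AS Source (ID, Name, Description, IsSpecialPrice, CategoryID, Price, SomeOtherField)
ON Target.ID = Source.ID
WHEN MATCHED THEN
    UPDATE SET Name = Source.Name,
               Description = Source.Description,
               IsSpecialPrice = Source.IsSpecialPrice,
               CategoryID = Source.CategoryID,
               Price = Source.Price,
               SomeOtherField = Source.SomeOtherField
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, Name, Description, IsSpecialPrice, CategoryID, Price, SomeOtherField)
    VALUES (Source.ID, Source.Name, Source.Description, Source.IsSpecialPrice, Source.CategoryID, Source.Price, Source.SomeOtherField);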
I'm working on data cleansing of a database, and I'm currently in the process of changing upper-case names into proper case. Hence, I'm using Excel to build an update statement like this:
|   | A    | B  | C                | D                                                                |
|---|------|----|------------------|------------------------------------------------------------------|
| 1 | Name | id | Proper case name | SQL Statement                                                    |
| 2 | AAAA | 1  | Aaaa             | =CONCAT("UPDATE table SET Name = "'",C2,"'" WHERE id = ",B2,";") |
| 3 | BBBB | 2  | Bbbb             | =CONCAT("UPDATE table SET Name = "'",C3,"'" WHERE id = ",B3,";") |
The SQL statements should be something like this:
UPDATE table SET Name = 'Aaaa' WHERE id = 1
UPDATE table SET Name = 'Bbbb' WHERE id = 2
I'm finding it difficult to get the apostrophes around the name.
I think you need to put the single quotes inside the double-quoted text arguments:
=CONCATENATE("UPDATE table SET Name = '",C2,"' WHERE id = ",B2,";")
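One caveat, as a sketch beyond the original answer: if a name can itself contain an apostrophe, the generated SQL breaks, so doubling any embedded quotes with SUBSTITUTE first is safer:

=CONCATENATE("UPDATE table SET Name = '",SUBSTITUTE(C2,"'","''"),"' WHERE id = ",B2,";")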
I have a JSON object containing an array and other properties.
I need to check the first value of the array for each row of my table.
Here is an example of the JSON:
{"objectID2":342,"objectID1":46,"objectType":["Demand","Entity"]}
So I need, for example, to get all rows with objectType[0] = 'Demand' and objectID1 = 46.
These are the table columns:
id | relationName | content
The content column contains the JSON.
Just query them, like this:
t=# with table_name(id, rn, content) as (values(1,null,'{"objectID2":342,"objectID1":46,"objectType":["Demand","Entity"]}'::json))
select * From table_name
where content->'objectType'->>0 = 'Demand' and content->>'objectID1' = '46';
id | rn | content
----+----+-------------------------------------------------------------------
1 | | {"objectID2":342,"objectID1":46,"objectType":["Demand","Entity"]}
(1 row)
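As a note on the operators involved, a sketch assuming the same table_name table: -> returns json, while ->> returns text, which is why objectID1 is compared as the string '46'.

select id,
       content->'objectType'        as type_array,  -- json array
       content->'objectType'->>0    as first_type,  -- first element, as text
       (content->>'objectID1')::int as object_id1   -- text cast to integer
from table_name;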
I have tables like the ones below:
CREATE TABLE IF NOT EXISTS "Article"(
"ArticleId" SERIAL NOT NULL,
"GenresIdList" integer[],
...
PRIMARY KEY ("ArticleId")
);
CREATE TABLE IF NOT EXISTS "Tag0"(
"TagId" SERIAL NOT NULL,
"Name" varchar,
...
PRIMARY KEY ("TagId")
);
ArticleId | GenresIdList
1         | {1}
2         | {1}
3         | {1,2}
4         | {1,2,3}
TagId | Name
1 | hiphop
2 | rock
When the user inputs data inputGenres, I want to get the results below:
if inputGenres = ['hiphop','rock','classical'], I should get no rows from Article;
if inputGenres = ['hiphop','rock'], I should get Article rows 3 and 4.
But because I query the two tables separately, even if I use && when selecting from Article, inputGenres = ['hiphop','rock','classical'] converts to the id array [1,2] (there is no classical), so I still get rows 3 and 4.
How can I solve this?
PS: I have to design the tables like this, storing only ids (not names) in Article, so I hope not to redesign the tables.
Code (with Node.js):
// convert inputGenres to tag0TagIdList
var tag0TagIdList = [];
var db = dbClient;
var query = 'SELECT * FROM "Tag0" WHERE "Name" IN (';
for (var i = 0; i < inputGenres.length; i++) {
    if (i > 0) {
        query += ',';
    }
    query += '$' + (i + 1);
}
query += ') ORDER BY "Name" ASC';
var params = inputGenres;
var selectTag0 = yield crudDatabase(db, query, params);
for (var i = 0; i < selectTag0.result.rows.length; i++) {
    tag0TagIdList.push(selectTag0.result.rows[i].TagId);
}
// end: convert inputGenres to tag0TagIdList

var db = dbClient;
var query = 'SELECT * FROM "Article" WHERE "GenresIdList" && $1';
var params = [tag0TagIdList];
var selectArticle = yield crudDatabase(db, query, params);
var tag0TagIdList = [];
var db = dbClient;
var query = 'select * from "Article" where "GenresIdList" @> (select array_agg("TagId") from unnest(array[';
for (var i = 0; i < inputGenres.length; i++) {
    if (i > 0) {
        query += ',';
    }
    query += '$' + (i + 1);
}
query += ']) as input_tags left join "Tag0" on ("Name" = input_tags))';
I don't know JavaScript much, but this should return what you want. The LEFT JOIN does the real work here: a genre with no matching tag contributes a NULL to the aggregated array, and array containment (@>) never matches an array containing NULL, so unknown genres make the query return no rows.
Query example:
SELECT * FROM "Article"
WHERE "GenresIdList" @> (
    SELECT array_agg("TagId")
    FROM unnest(ARRAY['hiphop', 'rock']) AS input_tags
    LEFT JOIN "Tag0" ON ("Name" = input_tags)
);
I have a flat file that has 6 columns: NoteID, Sequence, FileNumber, EntryDte, NoteType, and NoteText. The NoteText column holds 200 characters, and if a note is longer than 200 characters, a second row in the file contains the continuation of the note. It looks something like this:
|NoteID | Sequence | NoteText |
---------------------------------------------
|1234 | 1 | start of note text... |
|1234 | 2 | continue of note.... |
|1234 | 3 | more continuation of first note... |
|1235 | 1 | start of new note.... |
How can I, in SSIS, combine the multiple rows of NoteText into one row, so the result would look like this:
| NoteID | Sequence | NoteText |
---------------------------------------------------
|1234 | 1 | start of note text... continue of note... more continuation of first note... |
|1235 | 1 | start of new note.... |
I'd greatly appreciate any help.
Update: Changing the SynchronousInputID to None exposed the Output0Buffer and I was able to use it. Below is what I have in place now.
Dim NoteID As String = "-1"
Dim NoteString As String = ""
Dim IsFirstRow As Boolean = True
Dim NoteBlob As Byte()
Dim enc As New System.Text.ASCIIEncoding()

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    If Row.NoteID.ToString() = NoteID Then
        NoteString += Row.NoteHTML
        IsFirstRow = True
    Else
        If IsFirstRow Then
            Output0Buffer.AddRow()
            IsFirstRow = False
        End If
        NoteID = Row.NoteID.ToString()
        NoteString = Row.NoteHTML.ToString()
    End If
    NoteBlob = enc.GetBytes(NoteString)
    Output0Buffer.SingleNoteHTML.AddBlobData(NoteBlob)
    Output0Buffer.ClaimID = Row.ClaimID
    Output0Buffer.UserID = Row.UserID
    Output0Buffer.NoteTypeLookupID = Row.NoteTypeLookupID
    Output0Buffer.DateCreatedUTC = Row.DateCreated
    Output0Buffer.ActivityDateUTC = Row.ActivityDate
    Output0Buffer.IsPublic = Row.IsPublic
End Sub
My problem now is that I had to convert the output column from WSTR(4000) to NTEXT because some of the notes are so long. When it imports into my SQL table, it is just gibberish characters and not the actual notes.
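A likely cause: DT_NTEXT blob data is interpreted as Unicode (UTF-16), so bytes produced with ASCIIEncoding show up as garbage. A minimal sketch of the usual fix, assuming the same column names:

' DT_NTEXT expects UTF-16 bytes, so encode the string as Unicode
Dim enc As New System.Text.UnicodeEncoding()
NoteBlob = enc.GetBytes(NoteString)
Output0Buffer.SingleNoteHTML.AddBlobData(NoteBlob)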
In SQL Server Management Studio (using SQL), you could easily combine your NoteText field using the STUFF function with FOR XML PATH to combine your row values into a single column, like this:
select distinct
    n.noteid,
    min(n.sequence) over (partition by n.noteid) as sequence,
    stuff((select ' ' + NoteText
           from notes n1
           where n.noteid = n1.noteid
           order by n1.sequence  -- keep the note fragments in their original order
           for xml path ('')
          ), 1, 1, '') as NoteText
from notes n;
You will probably want to look into something along those lines that does a similar thing in SSIS. Check out this link on how to create a script component in SSIS to do something similar: SSIS Script Component - concat rows
SQL Fiddle Demo
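For reference, a sketch of the setup behind the demo, assuming a minimal hypothetical notes table:

-- table mirroring the question's sample rows
create table notes (noteid int, sequence int, NoteText varchar(200));
insert into notes values
    (1234, 1, 'start of note text...'),
    (1234, 2, 'continue of note....'),
    (1234, 3, 'more continuation of first note...'),
    (1235, 1, 'start of new note....');

-- the query above then returns one row per noteid, e.g.
-- 1234, 1, 'start of note text... continue of note.... more continuation of first note...'
-- 1235, 1, 'start of new note....'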