Snowflake JSON Key value changes, SQL required - snowflake-cloud-data-platform

How can I change the keys to other names, e.g. from country to COUNTRY?
Before:
ID | RECORD
2  | { "country": "England", "id": "100200", "status": "morestatus" }
3  | { "country": "AMERICA", "id": "100300", "status": "morestatus" }
1  | { "country": "UK", "id": "100100", "status": "somestatus" }
After:
ID | RECORD
2  | { "COUNTRY": "England", "id": "100200", "status": "morestatus" }
3  | { "COUNTRY": "AMERICA", "id": "100300", "status": "morestatus" }
1  | { "COUNTRY": "UK", "id": "100100", "status": "somestatus" }
I tried this, but it seems to work only on the value of "country"; it can't rename the key country to COUNTRY:
UPDATE "KAFKA_DB"."KAFKA_SCHEMA"."TARGET" T SET T.RECORD =OBJECT_INSERT(T.RECORD:'country','COUNTRY', TRUE) WHERE RECORD:"country" = 'country';

You can nest OBJECT_DELETE and OBJECT_INSERT to add a key with the old value and then delete the old key.
create temp table t1 as select parse_json('{ "country": "England", "id": "100200", "status": "morestatus" }') as V;
select * from t1;
select object_delete(object_insert(v, 'COUNTRY', v:country), 'country') from t1;
To persist the change, just do an UPDATE:
update t1 set v = object_delete(object_insert(v, 'COUNTRY', v:country), 'country');
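Selecting from t1 again confirms the rename (Snowflake stores object keys sorted, so COUNTRY now comes first):
select v from t1;
-- expected output: { "COUNTRY": "England", "id": "100200", "status": "morestatus" }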

The arguments passed to object_insert need to be corrected. The update flag will have no effect here since JSON has case-sensitive keys.
Try
UPDATE "KAFKA_DB"."KAFKA_SCHEMA"."TARGET" T SET T.RECORD = OBJECT_INSERT(T.RECORD,'COUNTRY', T.RECORD:country);
Also note that WHERE RECORD:"country" = 'country' limits your updates to rows where the country key has the value 'country', which is probably not what you want.
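Combining the two answers, a minimal sketch that renames the key in every row that actually has it (assuming the same TARGET table as in the question):
UPDATE "KAFKA_DB"."KAFKA_SCHEMA"."TARGET" T
SET T.RECORD = OBJECT_DELETE(OBJECT_INSERT(T.RECORD, 'COUNTRY', T.RECORD:country), 'country')
WHERE T.RECORD:country IS NOT NULL;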

Related

How can I insert a Contract with a ShippingAddress?

I want to register a new Contract Object.
However, when I try to register it together with ShippingAddress, I get an error.
How do I add a ShippingAddress with a Contract Object?
■ Field is not writeable: Contract.ShippingAddress
Contract cont = (Contract) parser.readValueAsStrict(Contract.class);
Address a = new Address();
a.country = 'Japan';
a.city = 'Tokyo';
cont.ShippingAddress = a;
insert cont;
I created a new Contract in the GUI, and in the query result below the second element of the array has ShippingAddress populated.
I would like to do this with the Apex API as well.
■ The ShippingAddress of the second record is populated.
[
{
"attributes": {
"type": "Contract",
"url": "/services/data/v56.0/sobjects/Contract/8000T000000223IQAQ"
},
"Id": "8000T000000223IQAQ",
"AccountId": "0010T00000Ox7sSQAR",
"BillingAddress": null,
"ShippingAddress": null,
"OwnerId": "0055g00000GRB5vAAH",
"Status": "Draft",
"StatusCode": "Draft",
"IsDeleted": false,
"ContractNumber": "00000114",
"CreatedDate": "2022-11-15T10:03:42.000+0000",
"CreatedById": "0055g00000GRB5vAAH",
"LastModifiedDate": "2022-11-15T10:03:42.000+0000",
"LastModifiedById": "0055g00000GRB5vAAH",
"SystemModstamp": "2022-11-15T10:03:42.000+0000",
"LastViewedDate": "2022-11-15T10:03:42.000+0000",
"LastReferencedDate": "2022-11-15T10:03:42.000+0000"
},
{
"BillingStreet": "西新宿1丁目",
"BillingCity": "新宿区",
"BillingState": "東京都",
"BillingPostalCode": "163-0590",
"BillingCountry": "日本",
"BillingAddress": {
"city": "新宿区",
"country": "日本",
"geocodeAccuracy": null,
"latitude": null,
"longitude": null,
"postalCode": "163-0590",
"state": "東京都",
"street": "西新宿1丁目"
},
"ShippingStreet": "西新宿1丁目",
"ShippingCity": "新宿区",
"ShippingState": "東京都",
"ShippingPostalCode": "163-0590",
"ShippingCountry": "日本",
"ShippingAddress": {
"city": "新宿区",
"country": "日本",
"geocodeAccuracy": null,
"latitude": null,
"longitude": null,
"postalCode": "163-0590",
"state": "東京都",
"street": "西新宿1丁目"
},
.....
}
]
https://developer.salesforce.com/docs/atlas.en-us.238.0.object_reference.meta/object_reference/compound_fields_address.htm
Standard address compound fields are read-only, and are only
accessible using the SOAP and REST APIs. See Compound Field
Considerations and Limitations for additional details of the
restrictions this imposes.
Instead, set the individual fields that make up the address:
cont.ShippingCountry = 'Japan';
cont.ShippingCity = 'Tokyo';
insert cont;
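Applied to the Japanese address from the sample result above, a minimal sketch (the AccountId is the one from the sample record and is only a stand-in):
// Set the writable component fields instead of the read-only
// compound field Contract.ShippingAddress.
Contract cont = new Contract();
cont.AccountId = '0010T00000Ox7sSQAR'; // stand-in Account Id from the sample
cont.Status = 'Draft';
cont.ShippingStreet = '西新宿1丁目';
cont.ShippingCity = '新宿区';
cont.ShippingState = '東京都';
cont.ShippingPostalCode = '163-0590';
cont.ShippingCountry = '日本';
insert cont;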

PostgreSQL jsonb_set multiple elements in array

I have the following jsonb structure in the column recipients of a table called mailing:
[
{
"text": "Text1",
"smsId": 1,
"value": "123456",
"status": "Sent"
},
{
"text": "Text1",
"smsId": 2,
"value": "23456",
"status": "Sent"
},
{
"text": "Text1",
"smsId": 3,
"value": "345678",
"status": "Sent"
}]
I need to update one field in multiple elements, so the outcome should look like this:
[
{
"text": "Text1",
"smsId": 1,
"value": "123456",
"status": "Delivered"
},
{
"text": "Text1",
"smsId": 2,
"value": "23456",
"status": "Delivered"
},
{
"text": "Text1",
"smsId": 3,
"value": "345678",
"status": "Delivered"
}]
The closest I got to a solution is this:
WITH item AS (SELECT mailing_id, ('{' || INDEX-1 || ',status}')::text[] AS PATH
FROM mailing, jsonb_array_elements(recipients) WITH ORDINALITY arr(recipient, INDEX)
WHERE recipient->>'smsId' = any(array['1', '2', '3']))
UPDATE mailing m
SET recipients = jsonb_set(recipients, item.path, '"Delivered"',FALSE)
FROM item
WHERE m.mailing_id = item.mailing_id;
But this solution updates only the first row, and I am not sure whether I should somehow loop this or try a different approach.
You need to aggregate modified array elements with jsonb_agg():
with new_data as (
select
mailing_id,
jsonb_agg(
case when value->>'smsId' = any('{1,2,3}') then value || '{"status": "Delivered"}'
else value
end) as recipients
from mailing
cross join jsonb_array_elements(recipients)
group by mailing_id
)
update mailing m
set recipients = n.recipients
from new_data n
where m.mailing_id = n.mailing_id;
Test it in db<>fiddle.
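The update relies on the jsonb || operator: concatenating two objects returns the left operand with matching top-level keys overwritten by the right one, for example:
select '{"smsId": 1, "status": "Sent"}'::jsonb || '{"status": "Delivered"}'::jsonb;
-- {"smsId": 1, "status": "Delivered"}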

I'm attempting to parse JSON data from Zendesk using the v: notation

With standard fields, like id, this works perfectly. But I am not finding a way to parse the custom fields, where the structure is:
"custom_fields": [
{
"id": 57852188,
"value": ""
},
{
"id": 57522467,
"value": ""
},
{
"id": 57522487,
"value": ""
}
]
The general format that I have been using is:
Select v:id,v:updatedat
from zd_tickets
Updated data:
{
"id":151693,
"brand_id": 36000,
"created_at": "2022-0523T19:26:35Z",
"custom_fields": [
{ "id": 57866008, "value": false },
{ "id": 360022282754, "value": "" },
{ "id": 80814087, "value": "NC" } ],
"group_id": 36000770
}
If you want to select all repeating elements you will need to use FLATTEN, otherwise you can use standard notation. This is all documented here: https://docs.snowflake.com/en/user-guide/querying-semistructured.html#retrieving-a-single-instance-of-a-repeating-element
So, using this CTE to access the data in a way that looks like a table:
with data(json) as (
select parse_json(column1) from values
('{
"id":151693,
"brand_id": 36000,
"created_at": "2022-0523T19:26:35Z",
"custom_fields": [
{ "id": 57866008, "value": false },
{ "id": 360022282754, "value": "" },
{ "id": 80814087, "value": "NC" } ],
"group_id": 36000770
} ')
)
here is SQL to unpack the top-level items, which you have shown you already have working:
select
json:id::number as id
,json:brand_id::number as brand_id
,try_to_timestamp(json:created_at::text, 'yyyy-mmddThh:mi:ssZ') as created_at
,json:custom_fields as custom_fields
from data;
gives:
ID     | BRAND_ID | CREATED_AT              | CUSTOM_FIELDS
151693 | 36000    | 2022-05-23 19:26:35.000 | [ { "id": 57866008, "value": false }, { "id": 360022282754, "value": "" }, { "id": 80814087, "value": "NC" } ]
So now how to tackle that json/array of custom_fields..
Well if you only ever have 3 values, and the order is always the same..
select
to_array(json:custom_fields) as custom_fields_a
,custom_fields_a[0] as field_0
,custom_fields_a[1] as field_1
,custom_fields_a[2] as field_2
from data;
gives:
CUSTOM_FIELDS_A | FIELD_0 | FIELD_1 | FIELD_2
[ { "id": 57866008, "value": false }, { "id": 360022282754, "value": "" }, { "id": 80814087, "value": "NC" } ] | { "id": 57866008, "value": false } | { "id": 360022282754, "value": "" } | { "id": 80814087, "value": "NC" }
so we can use flatten to access those objects, which makes "more rows"
select
d.json:id::number as id
,d.json:brand_id::number as brand_id
,try_to_timestamp(d.json:created_at::text, 'yyyy-mmddThh:mi:ssZ') as created_at
,f.*
from data as d
,table(flatten(input=>json:custom_fields)) f
ID     | BRAND_ID | CREATED_AT              | SEQ | KEY  | PATH | INDEX | VALUE | THIS
151693 | 36000    | 2022-05-23 19:26:35.000 | 1   | null | [0]  | 0     | { "id": 57866008, "value": false } | [ { "id": 57866008, "value": false }, { "id": 360022282754, "value": "" }, { "id": 80814087, "value": "NC" } ]
151693 | 36000    | 2022-05-23 19:26:35.000 | 1   | null | [1]  | 1     | { "id": 360022282754, "value": "" } | [ { "id": 57866008, "value": false }, { "id": 360022282754, "value": "" }, { "id": 80814087, "value": "NC" } ]
151693 | 36000    | 2022-05-23 19:26:35.000 | 1   | null | [2]  | 2     | { "id": 80814087, "value": "NC" } | [ { "id": 57866008, "value": false }, { "id": 360022282754, "value": "" }, { "id": 80814087, "value": "NC" } ]
So we can pull out known values (a manual PIVOT):
select
d.json:id::number as id
,d.json:brand_id::number as brand_id
,try_to_timestamp(d.json:created_at::text, 'yyyy-mmddThh:mi:ssZ') as created_at
,max(iff(f.value:id=80814087, f.value:value::text, null)) as v80814087
,max(iff(f.value:id=360022282754, f.value:value::text, null)) as v360022282754
,max(iff(f.value:id=57866008, f.value:value::text, null)) as v57866008
from data as d
,table(flatten(input=>json:custom_fields)) f
group by 1,2,3, f.seq
Grouping by f.seq means that if you have many "rows" of input, these will be kept apart even if they share common values for columns 1, 2, and 3.
gives:
ID     | BRAND_ID | CREATED_AT              | V80814087 | V360022282754  | V57866008
151693 | 36000    | 2022-05-23 19:26:35.000 | NC        | <empty string> | false
Now if you do not know the names of the values, there is no way short of dynamic SQL and double parsing to turn rows into columns.
I ended up doing the following, with 2 different CTEs (CTE and UCF):
1. Used to_array to gather my custom fields.
2. Unioned the custom fields together, extracting two fields from each array element: the id of the field and its value (using combinations of substring, position, and replace to clean up the data as needed; same setup for all fields).
3. Joined the resulting data to a custom fields table (which contains the id and a name) to include the name of the custom field in my result set.
WITH UCF AS (--Union Gathered Array into 2 fields (an id field and a value field)
WITH CTE AS( ---Gather array of custom fields
SELECT v:id as id,
to_array(v:custom_fields) as cf
,cf[0] as f0,cf[1] as f1,cf[2] as f2
FROM ZD_TICKETS)
SELECT id,
substring(f0,7,position(',',f0)-7) AS cf_id,
REPLACE(substring(f0,position('value":',f0)+8,position('"',f0,position('value":',f0)+8)),'"}') AS cf_value
FROM CTE c
WHERE f0 not like '%null%'
UNION
SELECT id,
substring(f1,7,position(',',f1)-7) AS cf_id,
REPLACE(substring(f1,position('value":',f1)+8,position('"',f1,position('value":',f1)+8)),'"}') AS cf_value
FROM CTE c
WHERE f1 not like '%null%'
-- field 3
UNION
SELECT id,
substring(f2,7,position(',',f2)-7) AS cf_id,
REPLACE(substring(f2,position('value":',f2)+8,position('"',f2,position('value":',f2)+8)),'"}') AS cf_value
FROM CTE c
WHERE f2 not like '%null%' --this removes records where the value is null
)
SELECT UCF.*,CFD.name FROM UCF
LEFT OUTER JOIN "FLBUSINESS_DB"."STAGING"."FILE_ZD_CUSTOM_FIELD_IDS" CFD
ON CFD.id=UCF.cf_id
WHERE cf_value<>'' --this removes records where the value is blank
The result set then contains one row per ticket id, custom field id, and value, with the field's name attached.
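As an alternative sketch (not from the thread), the same id/value pairs can be pulled with the FLATTEN approach shown in the answer instead of substring arithmetic, assuming the same ZD_TICKETS table:
select t.v:id::number      as id
      ,f.value:id::number  as cf_id
      ,f.value:value::text as cf_value
from zd_tickets as t
    ,table(flatten(input => t.v:custom_fields)) as f
where f.value:value::text <> '';
The join to the custom field names table then works the same way.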

N1QL get data from a bucket using the ID from a document. I can't see my results

I want to get data from another bucket using an ID stored inside a document.
The query I'm trying to execute:
SELECT meta().id,`docId`,`createdAt`,`updatedAt`,`data` FROM sales AS ccc
LET contact = (SELECT meta().id,`docId`,`createdAt`,`updatedAt`,`data` FROM contacts AS bbb USE KEYS [ccc.contactId])
WHERE status = 'active' LIMIT 10 OFFSET 0
Source doc:
{
"status": "active",
"data": {
"billingAddress": null,
"contactId": "1b529239ea294da687559e1464a8c5a8",
"count": 1,
"currency": "USD"
}
}
The doc I want to get, "1b529239ea294da687559e1464a8c5a8":
{
"id": "1b529239ea294da687559e1464a8c5a8",
"status": "active",
"data" : {
"name": "SpaceX", "location": {}
}
}
The query response I'm trying to get:
{
"status": "active",
"data": {
"billingAddress": null,
"contactId": "1b529239ea294da687559e1464a8c5a8",
"contact": { "data": { "name": "SpaceX" } }, // trying to get the contact into a "contact" field, selecting only the name
"count": 1,
"currency": "USD"
}
}
You never projected contact, so it doesn't appear in your results. Project it into data with OBJECT_CONCAT; the subquery returns an array, so [0] takes its single element, and {"contact": contact} wraps it under the contact key:
SELECT ccc.*, OBJECT_CONCAT(ccc.data, {"contact": contact}) AS data
FROM sales AS ccc
LET contact = (SELECT {bbb.data.name} AS data
FROM contacts AS bbb USE KEYS ccc.contactId)[0]
WHERE status = 'active'
LIMIT 10 OFFSET 0

How to update object fields inside nested array and dynamically set a field value based on some inputs

I have been working on a Mongo database for a while. The database has some visits that have this form:
[
{
"isPaid": false,
"_id": "5c12bc3dcea46f9d3658ca98",
"clientId": "5c117f2d1d6b9f9182182ae4",
"came_by": "Twitter Ad",
"reason": "Some reason",
"doctor_services": "Some service",
"doctor": "Dr. Michael",
"service": "Special service",
"payments": [
{
"date": "2018-12-13T21:23:05.424Z",
"_id": "5c12cdb9b236c59e75fe8190",
"sum": 345,
"currency": "$",
"note": "First Payment"
},
{
"date": "2018-12-13T21:23:07.954Z",
"_id": "5c12cdbbb236c59e75fe8191",
"sum": 100,
"currency": "$",
"note": "Second payment"
},
{
"date": "2018-12-13T21:23:16.767Z",
"_id": "5c12cdc4b236c59e75fe8192",
"sum": 5,
"currency": "$",
"note": "Third Payment"
}
],
"price": 500,
"createdAt": "2018-12-13T20:08:29.425Z",
"updatedAt": "2018-12-13T21:42:21.559Z"
}
]
I need a query that updates a field of a single payment, based on the _id of the visit and the _id of the payment inside the nested array. Also, when a payment's sum is updated so that the total of all payments is greater than or equal to price, the field isPaid should automatically be set to true.
I have tried some queries in Mongoose to achieve the first part, but none of them seem to work:
let paymentId = req.params.paymentId;
let updatedFields = req.body;
Visit.update(
{ "payments._id": paymentId },
{
$set: {
"visits.$": updatedFields
}
}
).exec((err, visit) => {
if (err) {
return res.status(500).send("Couldn't update payment");
}
return res.status(200).send("Updated payment");
});
As for the second part of the question, I haven't really come up with anything, so I would appreciate at least some direction on how to approach it.
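No answer is recorded here, but as a hedged sketch (not from the thread): the first part can target the payments array with the positional $ operator (the snippet above sets "visits.$", yet the array field is named payments), and on MongoDB 4.2+ the second part can be a pipeline update that recomputes isPaid. visitId, paymentId, and newSum are hypothetical inputs.
// Part 1: update one payment, matched by visit _id and payment _id;
// "payments.$" refers to the matched array element.
await Visit.updateOne(
  { _id: visitId, "payments._id": paymentId },
  { $set: { "payments.$.sum": newSum } }
);
// Part 2 (MongoDB >= 4.2): recompute isPaid via an aggregation-pipeline
// update: true when the total of all payment sums reaches the price.
await Visit.updateOne(
  { _id: visitId },
  [{ $set: { isPaid: { $gte: [{ $sum: "$payments.sum" }, "$price"] } } }]
);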
