PostgreSQL: get elements of a JSON array

Let's say that we have the following JSON in PostgreSQL:
{ "name": "John", "items": [ { "item_name": "lettuce", "price": 2.65, "units": "no" }, { "item_name": "ketchup", "price": 1.51, "units": "litres" } ] }
The JSONs are stored in the following table:
create table testy_response_p (
ID serial NOT NULL PRIMARY KEY,
content_json json NOT NULL
);

insert into testy_response_p (content_json) values (
'{ "name": "John", "items": [ { "item_name": "lettuce", "price": 2.65, "units": "no" }, { "item_name": "ketchup", "price": 1.51, "units": "litres" } ] }'
);
Since -> returns JSON and ->> returns text (e.g. select content_json ->> 'items' from testy_response_p), I want to use a subquery to get the elements of the array under items:
select *
from json_array_elements(
select content_json ->> 'items' from testy_response_p
)
All I get is an error, but I don't know what I'm doing wrong. The output of the subquery is text. The desired final output is:
{ "item_name": "lettuce", "price": 2.65, "units": "no" }
{ "item_name": "ketchup", "price": 1.51, "units": "litres" }

You need to join to the function's result. You can't use the ->> operator because it returns text, not json, and json_array_elements() only accepts a JSON value as input.
select p.id, e.*
from testy_response_p p
cross join lateral json_array_elements(p.content_json -> 'items') as e;
Online example: https://rextester.com/MFGEA29396
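As a side note, if you want the individual fields of each element rather than whole JSON objects, here is a minimal sketch against the same table (the e(elem) alias is just illustrative; for a jsonb column the counterpart function is jsonb_array_elements):
select p.id,
       e.elem ->> 'item_name' as item_name,
       (e.elem ->> 'price')::numeric as price,  -- ->> yields text, so cast for arithmetic
       e.elem ->> 'units' as units
from testy_response_p p
cross join lateral json_array_elements(p.content_json -> 'items') as e(elem);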


I'm attempting to parse JSON data from Zendesk in Snowflake using the v:field notation (where v is the VARIANT column)

With standard fields, like id, this works perfectly, but I can't find a way to parse the custom fields, where the structure is:
"custom_fields": [
{
"id": 57852188,
"value": ""
},
{
"id": 57522467,
"value": ""
},
{
"id": 57522487,
"value": ""
}
]
The general format that I have been using is:
Select v:id,v:updatedat
from zd_tickets
Updated sample data:
{
"id":151693,
"brand_id": 36000,
"created_at": "2022-0523T19:26:35Z",
"custom_fields": [
{ "id": 57866008, "value": false },
{ "id": 360022282754, "value": "" },
{ "id": 80814087, "value": "NC" } ],
"group_id": 36000770
}
If you want to select all repeating elements you will need to use FLATTEN; otherwise you can use standard notation. This is all documented here: https://docs.snowflake.com/en/user-guide/querying-semistructured.html#retrieving-a-single-instance-of-a-repeating-element
So, using this CTE to access the data in a way that looks like a table:
with data(json) as (
select parse_json(column1) from values
('{
"id":151693,
"brand_id": 36000,
"created_at": "2022-0523T19:26:35Z",
"custom_fields": [
{ "id": 57866008, "value": false },
{ "id": 360022282754, "value": "" },
{ "id": 80814087, "value": "NC" } ],
"group_id": 36000770
} ')
)
SQL to unpack the top-level items, which you have shown you already have working:
select
json:id::number as id
,json:brand_id::number as brand_id
,try_to_timestamp(json:created_at::text, 'yyyy-mmddThh:mi:ssZ') as created_at
,json:custom_fields as custom_fields
from data;
gives:
ID     | BRAND_ID | CREATED_AT              | CUSTOM_FIELDS
151693 | 36000    | 2022-05-23 19:26:35.000 | [ { "id": 57866008, "value": false }, { "id": 360022282754, "value": "" }, { "id": 80814087, "value": "NC" } ]
So now, how to tackle that JSON array of custom_fields...
Well, if you only ever have 3 values, and the order is always the same...
select
to_array(json:custom_fields) as custom_fields_a
,custom_fields_a[0] as field_0
,custom_fields_a[1] as field_1
,custom_fields_a[2] as field_2
from data;
gives:
CUSTOM_FIELDS_A: [ { "id": 57866008, "value": false }, { "id": 360022282754, "value": "" }, { "id": 80814087, "value": "NC" } ]
FIELD_0: { "id": 57866008, "value": false }
FIELD_1: { "id": 360022282754, "value": "" }
FIELD_2: { "id": 80814087, "value": "NC" }
So we can use FLATTEN to access those objects, which produces "more rows":
select
d.json:id::number as id
,d.json:brand_id::number as brand_id
,try_to_timestamp(d.json:created_at::text, 'yyyy-mmddThh:mi:ssZ') as created_at
,f.*
from data as d
,table(flatten(input=>json:custom_fields)) f
ID     | BRAND_ID | CREATED_AT              | SEQ | KEY  | PATH | INDEX | VALUE                                | THIS
151693 | 36000    | 2022-05-23 19:26:35.000 | 1   | null | [0]  | 0     | { "id": 57866008, "value": false }   | [ { "id": 57866008, "value": false }, { "id": 360022282754, "value": "" }, { "id": 80814087, "value": "NC" } ]
151693 | 36000    | 2022-05-23 19:26:35.000 | 1   | null | [1]  | 1     | { "id": 360022282754, "value": "" }  | [ { "id": 57866008, "value": false }, { "id": 360022282754, "value": "" }, { "id": 80814087, "value": "NC" } ]
151693 | 36000    | 2022-05-23 19:26:35.000 | 1   | null | [2]  | 2     | { "id": 80814087, "value": "NC" }    | [ { "id": 57866008, "value": false }, { "id": 360022282754, "value": "" }, { "id": 80814087, "value": "NC" } ]
So we can pull out known values (a manual PIVOT):
select
d.json:id::number as id
,d.json:brand_id::number as brand_id
,try_to_timestamp(d.json:created_at::text, 'yyyy-mmddThh:mi:ssZ') as created_at
,max(iff(f.value:id=80814087, f.value:value::text, null)) as v80814087
,max(iff(f.value:id=360022282754, f.value:value::text, null)) as v360022282754
,max(iff(f.value:id=57866008, f.value:value::text, null)) as v57866008
from data as d
,table(flatten(input=>json:custom_fields)) f
group by 1,2,3, f.seq
Grouping by f.seq means that if you have many "rows" of input they will be kept apart, even if they share common values for columns 1, 2 and 3.
gives:
ID     | BRAND_ID | CREATED_AT              | V80814087 | V360022282754  | V57866008
151693 | 36000    | 2022-05-23 19:26:35.000 | NC        | <empty string> | false
Now, if you do not know the names of the values, there is no way short of dynamic SQL and double parsing to turn rows into columns.
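If rows rather than columns are acceptable, though, a generic unpivot needs no hard-coded ids at all; a minimal sketch reusing the data CTE from above:
select
d.json:id::number as id
,f.value:id::number as custom_field_id
,f.value:"value"::text as custom_field_value  -- "value" quoted so it cannot collide with FLATTEN's own VALUE column
from data as d
,table(flatten(input => d.json:custom_fields)) f;
This returns one row per custom field, which a later dynamic-SQL step could pivot if columns are truly required.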
I ended up doing the following, with 2 different CTEs (CTE and UCF):
- Used to_array to gather my custom fields.
- Unioned the custom fields together (once per field), extracting the id of the field and the value, and used combinations of substring, position and replace to clean up data as needed (same setup for all fields).
- Joined the resulting data to a custom-fields table (containing the id and a name) to include the name of the custom field in my result set.
WITH UCF AS (--Union Gathered Array into 2 fields (an id field and a value field)
WITH CTE AS( ---Gather array of custom fields
SELECT v:id as id,
to_array(v:custom_fields) as cf
,cf[0] as f0,cf[1] as f1,cf[2] as f2
FROM ZD_TICKETS)
SELECT id,
substring(f0,7,position(',',f0)-7) AS cf_id,
REPLACE(substring(f0,position('value":',f0)+8,position('"',f0,position('value":',f0)+8)),'"}') AS cf_value
FROM CTE c
WHERE f0 not like '%null%'
UNION
SELECT id,
substring(f1,7,position(',',f1)-7) AS cf_id,
REPLACE(substring(f1,position('value":',f1)+8,position('"',f1,position('value":',f1)+8)),'"}') AS cf_value
FROM CTE c
WHERE f1 not like '%null%'
-- field 3
UNION
SELECT id,
substring(f2,7,position(',',f2)-7) AS cf_id,
REPLACE(substring(f2,position('value":',f2)+8,position('"',f2,position('value":',f2)+8)),'"}') AS cf_value
FROM CTE c
WHERE f2 not like '%null%' --this removes records where the value is null
)
SELECT UCF.*,CFD.name FROM UCF
LEFT OUTER JOIN "FLBUSINESS_DB"."STAGING"."FILE_ZD_CUSTOM_FIELD_IDS" CFD
ON CFD.id=UCF.cf_id
WHERE cf_value<>'' --this removes records where the value is blank
The result set then contains the ticket id, each custom field's id and value, and the custom field's name from the lookup table.
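For what it's worth, the same result can likely be had without the string surgery by flattening the array directly. A sketch, assuming the same ZD_TICKETS table (with VARIANT column v) and the same custom-field lookup table:
WITH FLAT AS (
SELECT t.v:id::number AS id,
f.value:id::number AS cf_id,
f.value:"value"::varchar AS cf_value  -- "value" quoted so it is read as a JSON key, not FLATTEN's VALUE column
FROM ZD_TICKETS t,
LATERAL FLATTEN(input => t.v:custom_fields) f
)
SELECT FLAT.*, CFD.name
FROM FLAT
LEFT OUTER JOIN "FLBUSINESS_DB"."STAGING"."FILE_ZD_CUSTOM_FIELD_IDS" CFD
ON CFD.id = FLAT.cf_id
WHERE FLAT.cf_value <> '';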

Loading JSON data into Snowflake with Snowpipe

We have the below JSON data (valid, as far as we can tell) residing in S3, and we are trying to load it into a Snowflake table via Snowpipe.
"Vendor": {
"string": "ABC"
},
"vmAddresses": [{
"Address": {
"string": "addr1"
},
"Category": {
"string": "order"
}
]
SELECT $1:Vendor.string::varchar,
$1:vmAddresses[0].Address.string,
object_keys($1:vmAddresses[0]),
object_pick($1:vmAddresses[0],'Address', 'Category')
FROM @S3://20210310194308.json
With OBJECT_KEYS we are able to get the keys, but we are unable to get the corresponding values. The below format is what we are trying to get:
{
"Address": "addr1",
"Category": "order"
}
Any help would be appreciated.
When I tried to validate your sample text through parse_json and an online JSON formatter, both of them complained about invalid JSON. I corrected it and ran your SQL:
with json_data as (
select parse_json( '{ "Vendor": {"string": "ABC" }, "vmAddresses": [ { "Address": { "string": "addr1" }, "Category": { "string": "order" } } ] }' ) j)
select j:Vendor.string,
j:vmAddresses[0].Address.string,
object_keys(j:vmAddresses[0]),
object_pick(j:vmAddresses[0],'Address', 'Category')
from json_data;
And it worked as expected:
j:vmAddresses[0].Address.string <-- returns "addr1"
object_keys(j:vmAddresses[0]) <-- returns [ "Address", "Category" ]
j:vmAddresses[0] or object_pick(j:vmAddresses[0],'Address', 'Category') <-- returns
{"Address": { "string": "addr1" }, "Category": { "string": "order" } }
Which value are you trying to parse? Everything seems to be working.
Additional answer, based on a comment:
You can use object_construct to build the JSON after reading the values with the vmAddresses[0].Address.string notation:
with json_data as (
select parse_json( '{ "Vendor": {"string": "ABC" }, "vmAddresses": [ { "Address": { "string": "addr1" }, "Category": { "string": "order" } } ] }' ) j)
select OBJECT_CONSTRUCT( 'Address', j:vmAddresses[0].Address.string, 'Category', j:vmAddresses[0].Category.string )
from json_data;
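If vmAddresses can hold more than one element, here is a hedged sketch that combines LATERAL FLATTEN with OBJECT_CONSTRUCT to build one such object per element:
with json_data as (
select parse_json( '{ "Vendor": {"string": "ABC" }, "vmAddresses": [ { "Address": { "string": "addr1" }, "Category": { "string": "order" } } ] }' ) j)
select object_construct(
'Address', f.value:Address.string,
'Category', f.value:Category.string ) as address_object  -- one {"Address": ..., "Category": ...} object per array element
from json_data,
lateral flatten(input => j:vmAddresses) f;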

How to retrieve all child nodes from a JSON file

I have the below JSON file in an external stage, and I'm trying to write a COPY query into the table using the query below. But it fetches a single record from the "values" node, whereas I need to insert all child elements of the "values" node. I have loaded this file into a table with the VARIANT datatype.
The query I'm using:
select record:batchId batchId,
record:results[0].pageInfo.numberOfPages NoofPages,
record:results[0].pageInfo.pageNumber pageNo,
record:results[0].pageInfo.pageSize PgSz,
record:results[0].requestId requestId,
record:results[0].showPopup showPopup,
record:results[0].values[0][0].columnId columnId,
record:results[0].values[0][0].value val
from lease;
{
"batchId": "",
"results": [
{
"pageInfo": {
"numberOfPages": ,
"pageNumber": ,
"pageSize":
},
"requestId": "",
"showPopup": false,
"values": [
[
{
"columnId": ,
"value": ""
},
{
"columnId": ,
"value":
}
]
]
}
]
}
You need to use the LATERAL FLATTEN function, something like this:
I created this table:
create table json_test (seq_no integer, json_text variant);
and then populated it with this JSON string:
insert into json_test(seq_no, json_text)
select 1, parse_json($${
"batchId": "a",
"results": [
{
"pageInfo": {
"numberOfPages": "1",
"pageNumber": "1",
"pageSize": "100000"
},
"requestId": "a",
"showPopup": false,
"values": [
[
{
"columnId": "4567",
"value": "2020-10-09T07:24:29.000Z"
},
{
"columnId": "4568",
"value": "2020-10-10T10:24:29.000Z"
}
]
]
}
]
}$$);
Then the following query:
select
json_text:batchId batchId
,json_text:results[0].pageInfo.numberOfPages numberOfPages
,json_text:results[0].pageInfo.pageNumber pageNumber
,json_text:results[0].pageInfo.pageSize pageSize
,json_text:results[0].requestId requestId
,json_text:results[0].showPopup showPopup
,f.value:columnId columnId
,f.value:value value
from json_test t
,lateral flatten(input => t.json_text:results[0]:values[0]) f;
gives these results - which I think is roughly what you are looking for:
BATCHID | NUMBEROFPAGES | PAGENUMBER | PAGESIZE | REQUESTID | SHOWPOPUP | COLUMNID | VALUE
"a"     | "1"           | "1"        | "100000" | "a"       | false     | "4567"   | "2020-10-09T07:24:29.000Z"
"a"     | "1"           | "1"        | "100000" | "a"       | false     | "4568"   | "2020-10-10T10:24:29.000Z"

ambiguous column name 'VALUE'

Any idea how to overcome the ambiguous-column error with the Snowflake LATERAL FLATTEN function in the logic below would be much appreciated.
I'm trying to flatten nested JSON data with the query below, selecting values from a VARIANT column, but I get an ambiguous column name 'VALUE' error from the lateral flatten function. Can someone help me achieve the desired output? The issue here is that the JSON key is itself named "value", and I couldn't get that data using lateral flatten.
Sample JSON Data
{"issues": [
{
"expand": "a,b,c,d",
"fields": {
"customfield_10000": null,
"customfield_10001": null,
"customfield_10002": [
{
"id": "1234",
"self": "xxx",
"value": "Test"
}
],
},
"id": "123456",
"key": "K-123"
}
]}
select
a.value:id::number as ISSUE_ID,
a.value:key::varchar as ISSUE_KEY,
b.value:id::varchar as ROOT_CAUSE_ID,
b.value:value::varchar as ROOT_CAUSE_VALUE
from
abc.table_variant,
lateral flatten( input => payload_json:issues) as a,
lateral flatten( input => a.value:fields.customfield_10002) as b;
Try
b.value:"value"::varchar
Quoting "value" forces it to be parsed as a JSON key lookup, so it no longer collides with the VALUE column that FLATTEN itself returns. The full query:
WITH CTE AS
(select parse_json('{"issues": [
{
"expand": "a,b,c,d",
"fields": {
"customfield_10000": null,
"customfield_10001": null,
"customfield_10002": [
{
"id": "1234",
"self": "xxx",
"value": "Test"
}
],
},
"id": "123456",
"key": "K-123"
}
]}')
as col)
select
a.value:id::number as ID,
a.value:key::varchar as KEY,
b.value:id::INT as customfield_10002,
b.value:value::varchar as customfield_10002_value
from cte,
lateral flatten(input => cte.col, path => 'issues') a,
lateral flatten(input => a.value:fields.customfield_10002) b;

How to write a SQL query in CosmosDB for a JSON document which has nested/multiple array

I need to write a SQL query in the Cosmos DB query editor that will fetch results from JSON documents stored in a collection, per my requirement shown below.
The example JSON
{
"id": "abcdabcd-1234-1234-1234-abcdabcdabcd",
"source": "Example",
"data": [
{
"Laptop": {
"New": "yes",
"Used": "no",
"backlight": "yes",
"warranty": "yes"
}
},
{
"Mobile": [
{
"order": 1,
"quantity": 2,
"price": 350,
"color": "Black",
"date": "07202019"
},
{
"order": 2,
"quantity": 1,
"price": 600,
"color": "White",
"date": "07202019"
}
]
},
{
"Accessories": [
{
"covers": "yes",
"cables": "few"
}
]
}
]
}
Requirement:
SELECT 'warranty' (Laptop), 'quantity' (Mobile), 'color' (Mobile), 'cables' (Accessories) for a specific 'date' (e.g. 07202019)
I've tried the following query
SELECT
c.data[0].Laptop.warranty,
c.data[1].Mobile[0].quantity,
c.data[1].Mobile[0].color,
c.data[2].Accessories[0].cables
FROM c
WHERE ARRAY_CONTAINS(c.data[1].Mobile, {date : '07202019'}, true)
Original output from the above query:
[
{
"warranty": "yes",
"quantity": 2,
"color": "Black",
"cables": "few"
}
]
But how can I get this expected output, which has all the order details from the 'Mobile' array:
[
{
"warranty": "yes",
"quantity": 2,
"color": "Black",
"cables": "few"
},
{
"warranty": "yes",
"quantity": 1,
"color": "White",
"cables": "few"
}
]
Since I wrote c.data[1].Mobile[0].quantity, i.e. 'Mobile[0]', which is hard-coded, only one entry (the first one) is returned in the output; I want all the entries in the array to be listed.
Please consider using the JOIN operator in your SQL:
SELECT DISTINCT
c.data[0].Laptop.warranty,
mobile.quantity,
mobile.color,
c.data[2].Accessories[0].cables
FROM c
JOIN data in c.data
JOIN mobile in data.Mobile
WHERE ARRAY_CONTAINS(data.Mobile, {date : '07202019'}, true)
Output: this returns the two rows shown in the expected output above.
Update Answer:
Your SQL:
SELECT DISTINCT c.data[0].Laptop.warranty, mobile.quantity, mobile.color, accessories.cables
FROM c
JOIN data in c.data
JOIN mobile in data.Mobile
JOIN accessories in data.Accessories
WHERE ARRAY_CONTAINS(data.Mobile, {date : '07202019'}, true)
My advice:
I have to say that, actually, the Cosmos DB JOIN operation is limited to the scope of a single document: you can join a parent object with child objects under the same document, but cross-document joins are NOT supported. However, your SQL tries to implement multiple parallel joins; in other words, Accessories and Mobile are parallel arrays, not nested within each other, so joining both multiplies the rows.
I suggest using a stored procedure to execute two queries and put the results together, or implementing the above process in your application code.
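One workaround that avoids the parallel join entirely, sketched under the assumption that the arrays keep their fixed positions (Laptop at data[0], Mobile at data[1], Accessories at data[2]) as in the original query: address Accessories by index and JOIN only the Mobile array:
SELECT
c.data[0].Laptop.warranty,
mobile.quantity,
mobile.color,
c.data[2].Accessories[0].cables
FROM c
JOIN mobile IN c.data[1].Mobile
WHERE ARRAY_CONTAINS(c.data[1].Mobile, {date : '07202019'}, true)
Each matching document then contributes one row per Mobile element, with warranty and cables repeated, which is the shape of the expected output above.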
Please see this case: CosmosDB Join (SQL API)
