Select JSON values from an array in an Oracle 19c table

I'm new to working with JSON and to a newly upgraded Oracle 19c DB.
I'm receiving a JSON array back from an API and storing it in an Oracle 19c table column with an IS JSON (STRICT) constraint:
[ {"key1":"valueA", "key2":"valueB"}, {"key1":"valueC", "key2":"valueD"} ]
I need to select the values in column form:
KEY1    KEY2
valueA  valueB
valueC  valueD
This returns one row with null columns:
Select jt.*
From json_data,
     json_table(myData, '$.[*]'
         columns( key1, key2)) jt;
I can't seem to make the Oracle functions (json_table, json_query, json_value, ...) handle this without wrapping the array in an object.
{ "base":[ {"key1":"valueA", "key2":"valueB"}, {"key1":"valueC", "key2":"valueD"} ] }
Then this query works:
Select jt.*
From json_data,
     json_table(myData, '$.base[*]'
         columns( key1, key2)) jt;
Is there a shortcoming with the Oracle functions or what am I doing wrong?

Select jt.*
From json_data,
     json_table(myData, '$[*]'
         columns( key1, key2)) jt;
Full test case with results:
with json_data(myData) as (
  select '[ {"key1":"valueA", "key2":"valueB"}, {"key1":"valueC", "key2":"valueD"} ]' from dual
)
Select jt.*
From json_data,
     json_table(myData, '$[*]'
         columns( key1, key2)) jt;
KEY1                 KEY2
-------------------- --------------------
valueA               valueB
valueC               valueD

You want $[*] not $.[*]
SELECT jt.*
FROM   json_data
       CROSS APPLY json_table(
         myData,
         '$[*]'
         COLUMNS (
           key1,
           key2
         )
       ) jt;
Which for the sample data:
CREATE TABLE json_data ( myData VARCHAR2(2000) CHECK( myData IS JSON(STRICT) ) );
INSERT INTO json_data ( myData )
VALUES ( '[ {"key1":"valueA", "key2":"valueB"}, {"key1":"valueC", "key2":"valueD"} ]' );
Outputs:
KEY1    KEY2
------  ------
valueA  valueB
valueC  valueD

Related

Snowflake json flatten from varchar data type

I need to flatten JSON data from a varchar/string column in Snowflake:
create table testjson1 (name varchar(200))
select name:batchNumber from testjson1, lateral flatten( input => name )
insert into testjson1 select parse_json('{"batchNumber" : "B12345", "productCode" : "P1234"}')
insert into testjson1 select ('2020-09-13T14:00:00.0000000')
select name:batchNumber from testjson1
The key point here is to cast/parse the varchar column, in this case by using TRY_PARSE_JSON:
Option 1:
select t.*, f.PATH, f.VALUE
from testjson1 t, lateral flatten(input => TRY_PARSE_JSON(name), OUTER=>true) f;
/*
NAME PATH VALUE
{"batchNumber":"G35H9","productCode":"AHF7124"} batchNumber "G35H9"
{"batchNumber":"G35H9","productCode":"AHF7124"} productCode "AHF7124"
2020-09-13T14:00:00.0000000 NULL NULL
*/
Option 2:
select *, TRY_PARSE_JSON(name):batchNumber::string AS result from testjson1
/*
NAME RESULT
{"batchNumber":"G35H9","productCode":"AHF7124"} G35H9
2020-09-13T14:00:00.0000000 NULL
*/
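Building on Option 2, both keys can be pulled from the parsed varchar in a single pass; a sketch, assuming the same testjson1 table:

```sql
select name,
       TRY_PARSE_JSON(name):batchNumber::string as batchNumber,
       TRY_PARSE_JSON(name):productCode::string as productCode
from testjson1;
-- rows that are not valid JSON (the timestamp string) return NULL for both columns
```

TRY_PARSE_JSON returns NULL instead of erroring on malformed input, which is what makes this safe on a mixed varchar column (PARSE_JSON would fail on the timestamp row).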

Querying json that starts with an array

I have a JSON that starts with an array, and I can't manage to query it.
JSON is in this format:
[
  {"#id":1,
   "field1":"qwerty",
   "#field2":{"name":"my_name", "name2":"my_name_2"},
   "field3":{"event":[{"event_type":"OP",...}]}
  },
  {"#id":2..
  }
]
Any suggestions on how to query this?
If I try to use lateral flatten I don't know what key to use:
select
'???'.Value:#id::string as id
from tabl1
,lateral flatten (tabl1_GB_RECORD:???) as gb_record
Your SQL was close but not complete; the following will give you the #id values:
with tbl1 (v) as (
select parse_json('
[
{"#id":1,
"field1":"qwerty",
"#field2":{"name":"my_name", "name2":"my_name_2"},
"field3":{"event":[{"event_type":"OP"}]}
},
{"#id":2
}
]')
)
select t.value:"#id" id from tbl1
, lateral flatten (input => v) as t
Result:
id
___
1
2
Let me know if you have any other questions
When the JSON begins with an array, you flatten the array itself and then reference the fields you want from each element. Something along these lines:
WITH x AS (
SELECT parse_json('[
{"#id":1,
"field1":"qwerty",
"#field2":{"name":"my_name", "name2":"my_name_2"},
"field3":{"event":[{"event_type":"OP"}]}
},
{"#id":2,
"field1":"qwerty",
"#field2":{"name":"my_name", "name2":"my_name_2"},
"field3":{"event":[{"event_type":"OP"}]}
}
]') as json_data
)
SELECT y.value,
y.value:"#id"::number as id,
y.value:field1::string as field1,
y.value:"#field2"::variant as field2,
y.value:field3::variant as field3,
y.value:"#field2":name::varchar as name
FROM x,
LATERAL FLATTEN (input=>json_data) y;
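The sample data also nests an event array under field3; that level can be reached by chaining a second LATERAL FLATTEN off the first. A sketch, assuming the same tbl1 CTE as above:

```sql
select t.value:"#id"::number      as id,
       e.value:event_type::string as event_type
from tbl1,
     lateral flatten (input => v) as t,
     lateral flatten (input => t.value:field3:event) as e;
-- elements without a field3.event array drop out unless OUTER => true is added
```

Each additional nesting level just adds another LATERAL FLATTEN whose input is a path off the previous flatten's value.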

Select distinct values across columns

I'm trying to create a query that returns the distinct values across 4 columns in a single row. My WHERE clause limits the results to a single row but keeps the data separated into columns. For example,
SELECT color1, color2, color3, color4 FROM art WHERE id = 1
would return (in columns)
|Red |Blue |Blue |Green |
But I want only the distinct values
|Red |Blue |Green | |
Is there such a thing as using an unpivot and then pivot back to columns?
One option is to UNPIVOT your data and then PIVOT via a dense_rank()
Example
Declare #YourTable Table ([ID] varchar(50),[Color1] varchar(50),[Color2] varchar(50),[Color3] varchar(50),[Color4] varchar(50))
Insert Into #YourTable Values
(1,'Red','Blue','Blue','Green')
Select *
From (
      Select A.ID
            ,B.Item
            ,Col = concat('Color', dense_rank() over (partition by ID order by Item))
      from #YourTable A
      Cross Apply ( values (color1)
                          ,(color2)
                          ,(color3)
                          ,(color4)
                  ) B(Item)
      Where ID = 1
     ) src
Pivot (max(item) for Col in (Color1, Color2, Color3, Color4)) pvt
Returns
ID Color1 Color2 Color3 Color4
1 Blue Green Red NULL
Note: The Where ID=1 is optional
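It can help to run the inner unpivot on its own first: the Cross Apply turns each row into one (ID, Item) row per color column, and dense_rank() gives duplicate colors (the two Blues) the same Col label, which is why the later PIVOT collapses them:

```sql
Select A.ID
      ,B.Item
      ,Col = concat('Color', dense_rank() over (partition by ID order by Item))
from #YourTable A
Cross Apply ( values (color1), (color2), (color3), (color4) ) B(Item)
Where ID = 1;
-- ID  Item   Col
-- 1   Blue   Color1
-- 1   Blue   Color1
-- 1   Green  Color2
-- 1   Red    Color3
```

The values land alphabetically (Blue, Green, Red) because dense_rank() orders by Item; that is also why the pivoted output above is not in the original Red/Blue/Green order.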

Extracting all values between special characters in SQL

I have the following values in the SQL Server table:
But I need to build a query whose output looks like this:
I know that I should probably use a combination of SUBSTRING and CHARINDEX, but I have no idea how to do it.
Could you please help me with how the query should look?
Thank you!
Try the following; it may work.
SELECT
offerId,
cTypes
FROM yourTable AS mt
CROSS APPLY
EXPLODE(mt.contractTypes) AS dp(cTypes);
You can use the string_split function:
select t.offerid, trim(translate(tt.value, '[]"', ' ')) as contractTypes
from table t cross apply
string_split(t.contractTypes, ',') tt(value);
The data in each row in the contractTypes column is a valid JSON array, so you may use OPENJSON() with explicit schema (result is a table with columns defined in the WITH clause) to parse this array and get the expected results:
Table:
CREATE TABLE Data (
offerId int,
contractTypes varchar(1000)
)
INSERT INTO Data
(offerId, contractTypes)
VALUES
(1, '[ "Hlavni pracovni pomer" ]'),
(2, '[ "ÖCVS", "Staz", "Prahovne" ]')
Query:
SELECT d.offerId, j.contractTypes
FROM Data d
OUTER APPLY OPENJSON(d.contractTypes) WITH (contractTypes varchar(100) '$') j
Result:
offerId contractTypes
1 Hlavni pracovni pomer
2 ÖCVS
2 Staz
2 Prahovne
As an additional option, if you want to return the position of the contract type in the contractTypes array, you may use OPENJSON() with default schema (result is a table with columns key, value and type and the value in the key column is the 0-based index of the element in the array):
SELECT
d.offerId,
CONVERT(int, j.[key]) + 1 AS contractId,
j.[value] AS contractType
FROM Data d
OUTER APPLY OPENJSON(d.contractTypes) j
ORDER BY CONVERT(int, j.[key])
Result:
offerId contractId contractType
1 1 Hlavni pracovni pomer
2 1 ÖCVS
2 2 Staz
2 3 Prahovne
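Going the other way, the values parsed out by OPENJSON() can be re-aggregated into a cleaned single row per offer with STRING_AGG() (SQL Server 2017+); a sketch against the same Data table:

```sql
SELECT d.offerId,
       STRING_AGG(j.[value], ', ')
           WITHIN GROUP (ORDER BY CONVERT(int, j.[key])) AS contractTypes
FROM Data d
CROSS APPLY OPENJSON(d.contractTypes) j
GROUP BY d.offerId;
-- 1   Hlavni pracovni pomer
-- 2   ÖCVS, Staz, Prahovne
```

The WITHIN GROUP clause keeps the values in their original array order; without it, STRING_AGG makes no ordering guarantee.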

How to extract elements of a JSONB array?

Running PostgreSQL v10.5.
My table table_a has a column metadata, which is of type jsonb.
One of its keys, array_key, holds a JSON array with a value something like this:
[{"key1":"value11", "key2":"value21", "key3":"value31"},
{"key1":"value21", "key2":"value22", "key3":"value23"}]
This is how I can query this key
SELECT metadata->>'array_key' from table_a
This gives me the entire array. Is there any way I can query only selected keys and perhaps format them?
The type of the result is text, i.e. pg_typeof(metadata->>'array_key') is text.
An ideal output would be
"value11, value13", "value21, value23"
Use jsonb_array_elements() to get elements of the array (as value) which can be filtered by keys:
select value->>'key1' as key1, value->>'key3' as key3
from table_a
cross join jsonb_array_elements(metadata->'array_key');
key1 | key3
---------+---------
value11 | value31
value21 | value23
(2 rows)
Use an aggregate to get the output as a single value for each row, e.g.:
select string_agg(concat_ws(', ', value->>'key1', value->>'key3'), '; ')
from table_a
cross join jsonb_array_elements(metadata->'array_key')
group by id;
string_agg
------------------------------------
value11, value31; value21, value23
(1 row)
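If the position of each array element is also needed, jsonb_array_elements() can be combined with WITH ORDINALITY to number the elements; a sketch, assuming the same table_a:

```sql
select ord, elem->>'key1' as key1, elem->>'key3' as key3
from table_a
cross join jsonb_array_elements(metadata->'array_key')
     with ordinality as t(elem, ord);
```

Note the `->` (jsonb) rather than `->>` (text) accessor on array_key: jsonb_array_elements needs a jsonb input, which is also why querying `metadata->>'array_key'` in the question returned plain text.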
