I have a JSON with an array nested inside it. I have already extracted all the other information from the JSON, but I can't do it for the array. It looks like this in BigQuery, where it is stored as a JSON string:
{
  "description": "Eminem.",
  "eDate": { "_seconds": 1673668800, "_nanoseconds": 409000000 },
  "pId": "test-plan-1",
  "Id": "test-p-1",
  "startDate": { "_seconds": 1673636400, "_nanoseconds": 957000000 },
  "Categories": [
    {
      "description": "Eminem 123.",
      "id": "cheap",
      "name": "Cheap Ticket",
      "sEnd": { "_seconds": 1767283200, "_nanoseconds": 225000000 },
      "sStart": { "_seconds": 1673272800, "_nanoseconds": 330000000 },
      "tRate": 0.19,
      "uPrice": 1.5
    }
  ],
  "title": "Apple",
  "vId": "test-v-1"
}
The array starts at Categories and ends at uPrice.
I expect every key, with its value, to get its own column: the data from the top-level JSON and from the array.
One possible way is to combine JSON_QUERY_ARRAY with ARRAY_TO_STRING. JSON_QUERY_ARRAY returns an ARRAY&lt;STRING&gt;, which the JSON functions cannot parse directly; joining its elements back into one STRING with ARRAY_TO_STRING makes the result parseable again. Note that this only yields valid JSON when the array holds a single element, as it does here.
Below is the result of the following query.
WITH json_dataset AS (
  SELECT '{"description":"Eminem.","eDate":{"_seconds":1673668800,"_nanoseconds":409000000},"pId":"test-plan-1","Id":"test-p-1","startDate":{"_seconds":1673636400,"_nanoseconds":957000000},"Categories":[{"description":"Eminem 123.","id":"cheap","name":"Cheap Ticket", "sEnd":{"_seconds":1767283200,"_nanoseconds":225000000},"sStart":{"_seconds":1673272800,"_nanoseconds":330000000},"tRate":0.19,"uPrice":1.5}],"title":"Apple","vId":"test-v-1"}' AS json_string
)
SELECT
  json_string,
  JSON_VALUE(json_string, '$.description') AS json_value_col,
  JSON_QUERY_ARRAY(json_string, '$.Categories') AS json_query_array_col,
  ARRAY_TO_STRING(JSON_QUERY_ARRAY(json_string, '$.Categories'), "") AS json_array_string,
  JSON_VALUE(ARRAY_TO_STRING(JSON_QUERY_ARRAY(json_string, '$.Categories'), ""), '$.description') AS target_example
FROM json_dataset;
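If instead you want one row per Categories element, with each key in its own column, you can UNNEST the array and apply JSON_VALUE to each element; this also holds up when the array has more than one entry. A minimal sketch against the same json_dataset CTE as above (the column list and the TIMESTAMP_SECONDS conversion of the _seconds fields, which look like Firestore timestamps, are illustrative assumptions):
SELECT
  JSON_VALUE(json_string, '$.title') AS title,
  JSON_VALUE(json_string, '$.pId')   AS pId,
  JSON_VALUE(category, '$.id')       AS category_id,
  JSON_VALUE(category, '$.name')     AS category_name,
  CAST(JSON_VALUE(category, '$.tRate')  AS FLOAT64) AS tRate,
  CAST(JSON_VALUE(category, '$.uPrice') AS FLOAT64) AS uPrice,
  -- assumption: the _seconds values are epoch seconds
  TIMESTAMP_SECONDS(CAST(JSON_VALUE(category, '$.sStart._seconds') AS INT64)) AS sStart
FROM json_dataset,
UNNEST(JSON_QUERY_ARRAY(json_string, '$.Categories')) AS category;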
Related
I want to split strings into columns.
My columns should be:
account_id, resource_type, resource_name
I have a JSON file source that I have been trying to parse via ADF data flow. That hasn't worked for me, hence I flattened the data and brought it into SQL Server (I am open to parsing values via ADF or SQL if anyone can show me how). Please check the JSON file at the bottom.
Use this code to reproduce the data I am working with:
CREATE TABLE test.test2
(
resource_type nvarchar(max) NULL
)
INSERT INTO test.test2 ([resource_type])
VALUES
('account_id:224526257458,resource_type:buckets,resource_name:camp-stage-artifactory'),
('account_id:535533456241,resource_type:buckets,resource_name:tni-prod-diva-backups'),
('account_id:369798452057,resource_type:buckets,resource_name:369798452057-s3-manifests'),
('account_id:460085747812,resource_type:buckets,resource_name:vessel-incident-report-nonprod-accesslogs')
The output that I should be able to query in SQL Server should look like this:
account_id      resource_type   resource_name
224526257458    buckets         camp-stage-artifactory
535533456241    buckets         tni-prod-diva-backups
and so forth.
Please help me out and ask for clarification if needed. Thanks in advance.
EDIT:
Source JSON Format:
{
"start_date": "2021-12-01 00:00:00+00:00",
"end_date": "2021-12-31 23:59:59+00:00",
"resource_type": "all",
"records": [
{
"directconnect_connections": [
"account_id:227148359287,resource_type:directconnect_connections,resource_name:'dxcon-fh40evn5'",
"account_id:401311080156,resource_type:directconnect_connections,resource_name:'dxcon-ffxgf6kh'",
"account_id:401311080156,resource_type:directconnect_connections,resource_name:'dxcon-fg5j5v6o'",
"account_id:227148359287,resource_type:directconnect_connections,resource_name:'dxcon-fgvfo1ej'"
]
},
{
"virtual_interfaces": [
"account_id:227148359287,resource_type:virtual_interfaces,resource_name:'dxvif-fgvj25vt'",
"account_id:227148359287,resource_type:virtual_interfaces,resource_name:'dxvif-fgbw5gs0'",
"account_id:401311080156,resource_type:virtual_interfaces,resource_name:'dxvif-ffnosohr'",
"account_id:227148359287,resource_type:virtual_interfaces,resource_name:'dxvif-fg18bdhl'",
"account_id:227148359287,resource_type:virtual_interfaces,resource_name:'dxvif-ffmf6h64'",
"account_id:390251991779,resource_type:virtual_interfaces,resource_name:'dxvif-fgkxjhcj'",
"account_id:227148359287,resource_type:virtual_interfaces,resource_name:'dxvif-ffp6kl3f'"
]
}
]
}
Since you don't have a valid JSON string, and to avoid getting too deep into the business of string manipulation... perhaps this will help.
Select B.*
 From test2 A
 Cross Apply ( Select account_id    = max(case when value like 'account_id:%'    then stuff(value,1,11,'') end)
                     ,resource_type = max(case when value like 'resource_type:%' then stuff(value,1,14,'') end)
                     ,resource_name = max(case when value like 'resource_name:%' then stuff(value,1,14,'') end)
                from string_split(resource_type,',')
             ) B
Results
account_id resource_type resource_name
224526257458 buckets camp-stage-artifactory
535533456241 buckets tni-prod-diva-backups
369798452057 buckets 369798452057-s3-manifests
460085747812 buckets vessel-incident-report-nonprod-accesslogs
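The STUFF(value, 1, n, '') calls simply delete the leading 'key:' prefix, where n is the prefix length including the colon ('account_id:' is 11 characters; 'resource_type:' and 'resource_name:' are 14). A quick standalone check, using nothing beyond T-SQL built-ins:
SELECT STUFF('account_id:224526257458', 1, 11, '') AS account_id;
-- returns: 224526257458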
Unfortunately, the values inside the arrays are not valid JSON. You can patch them up by adding {} to the beginning/end, and adding " on either side of : and ,.
DECLARE @json nvarchar(max) = N'{
"start_date": "2021-12-01 00:00:00+00:00",
"end_date": "2021-12-31 23:59:59+00:00",
"resource_type": "all",
"records": [
{
"directconnect_connections": [
"account_id:227148359287,resource_type:directconnect_connections,resource_name:''dxcon-fh40evn5''",
"account_id:401311080156,resource_type:directconnect_connections,resource_name:''dxcon-ffxgf6kh''",
"account_id:401311080156,resource_type:directconnect_connections,resource_name:''dxcon-fg5j5v6o''",
"account_id:227148359287,resource_type:directconnect_connections,resource_name:''dxcon-fgvfo1ej''"
]
},
{
"virtual_interfaces": [
"account_id:227148359287,resource_type:virtual_interfaces,resource_name:''dxvif-fgvj25vt''",
"account_id:227148359287,resource_type:virtual_interfaces,resource_name:''dxvif-fgbw5gs0''",
"account_id:401311080156,resource_type:virtual_interfaces,resource_name:''dxvif-ffnosohr''",
"account_id:227148359287,resource_type:virtual_interfaces,resource_name:''dxvif-fg18bdhl''",
"account_id:227148359287,resource_type:virtual_interfaces,resource_name:''dxvif-ffmf6h64''",
"account_id:390251991779,resource_type:virtual_interfaces,resource_name:''dxvif-fgkxjhcj''",
"account_id:227148359287,resource_type:virtual_interfaces,resource_name:''dxvif-ffp6kl3f''"
]
}
]
}';
SELECT
j4.account_id,
j4.resource_type,
TRIM('''' FROM j4.resource_name) resource_name
FROM OPENJSON(@json, '$.records') j1
CROSS APPLY OPENJSON(j1.value) j2
CROSS APPLY OPENJSON(j2.value) j3
CROSS APPLY OPENJSON('{"' + REPLACE(REPLACE(j3.value, ':', '":"'), ',', '","') + '"}')
WITH (
account_id bigint,
resource_type varchar(20),
resource_name varchar(100)
) j4;
The first three calls to OPENJSON have no schema, so the result set has three columns: key, value, and type. For arrays (j1 and j3), key is the index into the array; for a single object (j2), key is each property name.
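For instance, a schema-less OPENJSON over a small mixed array shows that shape (a minimal check; the type codes are 1 = string, 4 = array, 5 = object):
SELECT [key], [value], [type]
FROM OPENJSON(N'["a", {"b":1}, [2,3]]');
-- key  value    type
-- 0    a        1
-- 1    {"b":1}  5
-- 2    [2,3]    4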
I'm trying to load JSON data to BigQuery. The excerpt of my data causing problems looks like this:
[{"Value":"123","Code":"A"},{"Value":"000","Code":"B"}]
{"Value":"456","Code":"A"}
[{"Value":"123","Code":"A"},{"Value":"789","Code":"C"},{"Value":"000","Code":"B"}]
{"Value":"Z","Code":"A"}
I have defined the schema for this field to be:
{
"fields": [
{
"mode": "NULLABLE",
"name": "Code",
"type": "STRING"
},
{
"mode": "NULLABLE",
"name": "Value",
"type": "STRING"
}
],
"mode": "REPEATED",
"name": "Properties",
"type": "RECORD"
}
But I'm having trouble successfully extracting the string and array values into one repeated field. This SQL will successfully extract the string values:
JSON_EXTRACT_SCALAR(json_string,'$.Properties.Code') as Code,
JSON_EXTRACT_SCALAR(json_string,'$.Properties.Value') as Value
And this SQL will successfully extract the array values:
ARRAY(
SELECT
STRUCT(
JSON_EXTRACT_SCALAR(Properties_Array,'$.Code') AS Code,
JSON_EXTRACT_SCALAR(Properties_Array,'$.Value') AS Value
)
FROM UNNEST(JSON_EXTRACT_ARRAY(json_string,'$.Properties')) Properties_Array)
AS Properties
I am trying to find a way to have BigQuery read this string as a one-element array instead of preprocessing the data. Is this possible in Standard SQL?
Below is an example for BigQuery Standard SQL.
#standardSQL
WITH `project.dataset.table` as (
SELECT '{"Properties":[{"Value":"123","Code":"A"},{"Value":"000","Code":"B"}]}' json_string UNION ALL
SELECT '{"Properties":{"Value":"456","Code":"A"}}' UNION ALL
SELECT '{"Properties":[{"Value":"123","Code":"A"},{"Value":"789","Code":"C"},{"Value":"000","Code":"B"}]}' UNION ALL
SELECT '{"Properties": {"Value":"Z","Code":"A"}}'
)
SELECT json_string,
ARRAY(
SELECT STRUCT(
JSON_EXTRACT_SCALAR(Properties,'$.Code') AS Code,
JSON_EXTRACT_SCALAR(Properties,'$.Value') AS Value
)
FROM UNNEST(IFNULL(
JSON_EXTRACT_ARRAY(json_string,'$.Properties'),
[JSON_EXTRACT(json_string,'$.Properties')])) Properties
) AS Properties
FROM `project.dataset.table`
with output: every row's Properties ends up in the same repeated field, whether the source JSON held an array or a single object.
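The IFNULL is what makes this work: JSON_EXTRACT_ARRAY returns NULL when $.Properties is a plain object rather than an array, and the fallback wraps the object extracted by JSON_EXTRACT into a one-element array literal. A quick check of that behaviour (a sketch, BigQuery built-ins only):
#standardSQL
SELECT
  -- NULL: $.Properties is an object, not an array
  JSON_EXTRACT_ARRAY('{"Properties":{"Value":"456","Code":"A"}}', '$.Properties') AS as_array,
  -- the fallback used above: a one-element array
  [JSON_EXTRACT('{"Properties":{"Value":"456","Code":"A"}}', '$.Properties')] AS wrapped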
I have a table with a column that contains a JSON object; the value type is always a string.
I need two kinds of information:
a list of the JSON keys
the JSON converted into an array of key-value pairs
This is what I got so far, which is working:
CREATE TEMP FUNCTION jsonObjectKeys(input STRING)
RETURNS Array<String>
LANGUAGE js AS """
return Object.keys(JSON.parse(input));
""";
CREATE TEMP FUNCTION jsonToKeyValueArray(input STRING)
RETURNS Array<Struct<key String, value String>>
LANGUAGE js AS """
let json = JSON.parse(input);
return Object.keys(json).map(e => {
return { "key" : e, "value" : json[e] }
});
""";
WITH input AS (
SELECT "{\"key1\": \"value1\", \"key2\": \"value2\"}" AS json_column
UNION ALL
SELECT "{\"key1\": \"value1\", \"key3\": \"value3\"}" AS json_column
UNION ALL
SELECT "{\"key5\": \"value5\"}" AS json_column
)
SELECT
json_column,
jsonObjectKeys(json_column) AS keys,
jsonToKeyValueArray(json_column) AS key_value
FROM input
The problem is that a JavaScript UDF is not the best in terms of compute optimization, so I'm trying to understand whether there is a way to achieve these two needs (or at least one of them) in plain SQL, without functions.
Below is for BigQuery Standard SQL
#standardsql
select
json_column,
array(select trim(split(kv, ':')[offset(0)]) from t.kv kv) as keys,
array(
select as struct
trim(split(kv, ':')[offset(0)]) as key,
trim(split(kv, ':')[offset(1)]) as value
from t.kv kv
) as key_value
from input,
unnest([struct(split(translate(json_column, '{}"', '')) as kv)]) t
Applied to the sample data from your question, the output is
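The approach is pure string surgery: TRANSLATE deletes every {, } and " character, and SPLIT then breaks the remainder on commas, so it assumes no key or value itself contains a comma or colon. What the inner transform produces for one sample row (a quick check):
SELECT SPLIT(TRANSLATE('{"key1": "value1", "key2": "value2"}', '{}"', '')) AS kv
-- kv = ['key1: value1', ' key2: value2']; hence the TRIM around each piece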
I have a query like (simplified):
SELECT
JSON_QUERY(r.SerializedData, '$.Values') AS [Values]
FROM
<TABLE> r
WHERE ...
The result is like this:
{ "2019":120, "20191":120, "201902":121, "201903":134, "201904":513 }
How can I remove the entries with a key length less than 6?
Result:
{ "201902":121, "201903":134, "201904":513 }
One possible solution is to parse the JSON and generate it again using string manipulation, keeping only the keys of the desired length:
Table:
CREATE TABLE Data (SerializedData nvarchar(max))
INSERT INTO Data (SerializedData)
VALUES (N'{"Values": { "2019":120, "20191":120, "201902":121, "201903":134, "201904":513 }}')
Statement (for SQL Server 2017+):
UPDATE Data
SET SerializedData = JSON_MODIFY(
SerializedData,
'$.Values',
JSON_QUERY(
(
SELECT CONCAT('{', STRING_AGG(CONCAT('"', [key] ,'":', [value]), ','), '}')
FROM OPENJSON(SerializedData, '$.Values') j
WHERE LEN([key]) >= 6
)
)
)
SELECT JSON_QUERY(d.SerializedData, '$.Values') AS [Values]
FROM Data d
Result:
Values
{"201902":121,"201903":134,"201904":513}
Notes:
It's important to note that JSON_MODIFY() in lax mode deletes the specified key if the new value is NULL and the path points to a JSON object. But in this specific case (a JSON object with variable key names), I prefer the solution above.
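If you only want to read the filtered object without modifying the table, the same rebuild works as a plain SELECT (a sketch against the Data table above):
SELECT JSON_QUERY((
         SELECT CONCAT('{', STRING_AGG(CONCAT('"', [key], '":', [value]), ','), '}')
         FROM OPENJSON(d.SerializedData, '$.Values')
         WHERE LEN([key]) >= 6
       )) AS [Values]
FROM Data d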
I am trying to import data in the following format into a hive table
[
{
"identifier" : "id#1",
"dataA" : "dataA#1"
},
{
"identifier" : "id#2",
"dataA" : "dataA#2"
}
]
I have multiple files like this and I want each {} to form one row in the table. This is what I have tried:
CREATE EXTERNAL TABLE final_table(
identifier STRING,
dataA STRING
) ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION "s3://bucket/path_in_bucket/"
This is not creating a single row for each {} though. I have also tried
CREATE EXTERNAL TABLE final_table(
rows ARRAY< STRUCT<
identifier: STRING,
dataA: STRING
>>
) ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION "s3://bucket/path_in_bucket/"
but this does not work either. Is there some way of telling the Hive query that the input is an array, with each record being an item in the array? Any suggestions on what to do?
Here is what you need
Method 1: Adding name to the array
Data
{"data":[{"identifier" : "id#1","dataA" : "dataA#1"},{"identifier" : "id#2","dataA" : "dataA#2"}]}
SQL
SET hive.support.sql11.reserved.keywords=false;
CREATE EXTERNAL TABLE IF NOT EXISTS ramesh_test (
data array<
struct<
identifier:STRING,
dataA:STRING
>
>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 'my_location';
SELECT rows.identifier,
rows.dataA
FROM ramesh_test d
LATERAL VIEW EXPLODE(d.data) d1 AS rows ;
Output
identifier  dataA
id#1        dataA#1
id#2        dataA#2
Method 2 - No Changes to the data
Data
[{"identifier":"id#1","dataA":"dataA#1"},{"identifier":"id#2","dataA":"dataA#2"}]
SQL
CREATE EXTERNAL TABLE IF NOT EXISTS ramesh_raw_json (
json STRING
)
LOCATION 'my_location';
SELECT get_json_object (exp.json_object, '$.identifier') AS Identifier,
get_json_object (exp.json_object, '$.dataA') AS dataA
FROM ( SELECT json_object
FROM ramesh_raw_json a
LATERAL VIEW EXPLODE (split(regexp_replace(regexp_replace(a.json,'\\}\\,\\{','\\}\\;\\{'),'\\[|\\]',''), '\\;')) json_exploded AS json_object ) exp;
Output
Identifier  dataA
id#1        dataA#1
id#2        dataA#2
JSON records in data files must appear one per line; an empty line produces a NULL record. This JSON should work:
{ "identifier" : "id#1", "dataA" : "dataA#1" },
{ "identifier" : "id#2", "dataA" : "dataA#2" }